00:00:00.000 Started by upstream project "autotest-nightly" build number 3884 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3264 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.138 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.139 The recommended git tool is: git 00:00:00.139 using credential 00000000-0000-0000-0000-000000000002 00:00:00.141 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.175 Fetching changes from the remote Git repository 00:00:00.190 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.219 Using shallow fetch with depth 1 00:00:00.219 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.219 > git --version # timeout=10 00:00:00.238 > git --version # 'git version 2.39.2' 00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.256 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.256 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.504 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.515 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.526 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:08.526 > git config core.sparsecheckout # timeout=10 00:00:08.536 > git read-tree -mu HEAD # timeout=10 00:00:08.556 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:08.575 Commit message: "inventory: add WCP3 to free inventory" 00:00:08.575 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:08.666 [Pipeline] Start of Pipeline 00:00:08.680 [Pipeline] library 00:00:08.681 Loading library shm_lib@master 00:00:08.681 Library shm_lib@master is cached. Copying from home. 00:00:08.694 [Pipeline] node 00:00:08.703 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:08.705 [Pipeline] { 00:00:08.713 [Pipeline] catchError 00:00:08.714 [Pipeline] { 00:00:08.724 [Pipeline] wrap 00:00:08.733 [Pipeline] { 00:00:08.739 [Pipeline] stage 00:00:08.740 [Pipeline] { (Prologue) 00:00:08.756 [Pipeline] echo 00:00:08.757 Node: VM-host-SM9 00:00:08.762 [Pipeline] cleanWs 00:00:08.770 [WS-CLEANUP] Deleting project workspace... 00:00:08.770 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.775 [WS-CLEANUP] done 00:00:08.981 [Pipeline] setCustomBuildProperty 00:00:09.066 [Pipeline] httpRequest 00:00:09.085 [Pipeline] echo 00:00:09.086 Sorcerer 10.211.164.101 is alive 00:00:09.094 [Pipeline] httpRequest 00:00:09.098 HttpMethod: GET 00:00:09.099 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.099 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.115 Response Code: HTTP/1.1 200 OK 00:00:09.116 Success: Status code 200 is in the accepted range: 200,404 00:00:09.116 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:32.921 [Pipeline] sh 00:00:33.203 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:33.221 [Pipeline] httpRequest 00:00:33.250 [Pipeline] echo 00:00:33.252 Sorcerer 10.211.164.101 is alive 00:00:33.261 [Pipeline] httpRequest 00:00:33.266 HttpMethod: GET 00:00:33.266 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:33.267 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:33.278 Response Code: HTTP/1.1 200 OK 00:00:33.278 Success: Status code 200 is in the accepted range: 200,404 00:00:33.279 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:13.223 [Pipeline] sh 00:01:13.504 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:16.053 [Pipeline] sh 00:01:16.337 + git -C spdk log --oneline -n5 00:01:16.337 719d03c6a sock/uring: only register net impl if supported 00:01:16.337 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:16.337 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:16.337 6c7c1f57e accel: add sequence outstanding stat 00:01:16.337 3bc8e6a26 accel: add utility to put task 00:01:16.359 [Pipeline] writeFile 00:01:16.378 [Pipeline] sh 00:01:16.661 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:16.672 [Pipeline] sh 00:01:16.950 + cat autorun-spdk.conf 00:01:16.950 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.950 SPDK_TEST_NVMF=1 00:01:16.950 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.950 SPDK_TEST_URING=1 00:01:16.950 SPDK_TEST_VFIOUSER=1 00:01:16.950 SPDK_TEST_USDT=1 00:01:16.950 SPDK_RUN_ASAN=1 00:01:16.950 SPDK_RUN_UBSAN=1 00:01:16.950 NET_TYPE=virt 00:01:16.950 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.956 RUN_NIGHTLY=1 00:01:16.958 [Pipeline] } 00:01:16.972 [Pipeline] // stage 00:01:16.985 [Pipeline] stage 00:01:16.987 [Pipeline] { (Run VM) 00:01:16.999 [Pipeline] sh 00:01:17.275 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:17.275 + echo 'Start stage prepare_nvme.sh' 00:01:17.275 Start stage prepare_nvme.sh 00:01:17.275 + [[ -n 4 ]] 00:01:17.275 + disk_prefix=ex4 00:01:17.275 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:17.275 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:17.275 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:17.275 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.275 ++ SPDK_TEST_NVMF=1 00:01:17.275 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.275 ++ SPDK_TEST_URING=1 00:01:17.275 ++ SPDK_TEST_VFIOUSER=1 00:01:17.275 ++ SPDK_TEST_USDT=1 00:01:17.275 ++ SPDK_RUN_ASAN=1 00:01:17.275 ++ SPDK_RUN_UBSAN=1 00:01:17.275 ++ NET_TYPE=virt 
00:01:17.275 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.275 ++ RUN_NIGHTLY=1 00:01:17.275 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:17.275 + nvme_files=() 00:01:17.275 + declare -A nvme_files 00:01:17.275 + backend_dir=/var/lib/libvirt/images/backends 00:01:17.275 + nvme_files['nvme.img']=5G 00:01:17.275 + nvme_files['nvme-cmb.img']=5G 00:01:17.275 + nvme_files['nvme-multi0.img']=4G 00:01:17.275 + nvme_files['nvme-multi1.img']=4G 00:01:17.275 + nvme_files['nvme-multi2.img']=4G 00:01:17.275 + nvme_files['nvme-openstack.img']=8G 00:01:17.275 + nvme_files['nvme-zns.img']=5G 00:01:17.275 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:17.275 + (( SPDK_TEST_FTL == 1 )) 00:01:17.275 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:17.275 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:17.275 + for nvme in "${!nvme_files[@]}" 00:01:17.275 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:17.275 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:17.275 + for nvme in "${!nvme_files[@]}" 00:01:17.275 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:17.275 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:17.275 + for nvme in "${!nvme_files[@]}" 00:01:17.275 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:17.275 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:17.275 + for nvme in "${!nvme_files[@]}" 00:01:17.275 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:17.275 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:17.275 + for nvme in "${!nvme_files[@]}" 00:01:17.275 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:17.275 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:17.534 + for nvme in "${!nvme_files[@]}" 00:01:17.534 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:17.534 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:17.534 + for nvme in "${!nvme_files[@]}" 00:01:17.534 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:17.534 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:17.534 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:17.534 + echo 'End stage prepare_nvme.sh' 00:01:17.534 End stage prepare_nvme.sh 00:01:17.545 [Pipeline] sh 00:01:17.825 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:17.825 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 
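The body of spdk/scripts/vagrant/create_nvme_img.sh is not shown in this log; the "Formatting '…', fmt=raw … preallocation=falloc" lines above are the kind of output qemu-img prints, so a rough manual equivalent of one of those steps might look like the sketch below (using qemu-img with these options is an assumption about what the helper does, not taken from the script itself):

    # Hypothetical stand-in for: create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
    # Create a raw, falloc-preallocated 5G backing file for the ex4 NVMe disk.
    sudo qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex4-nvme.img 5G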
00:01:18.083 00:01:18.083 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:18.083 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:18.083 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:18.083 HELP=0 00:01:18.083 DRY_RUN=0 00:01:18.083 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:18.083 NVME_DISKS_TYPE=nvme,nvme, 00:01:18.083 NVME_AUTO_CREATE=0 00:01:18.083 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:18.083 NVME_CMB=,, 00:01:18.083 NVME_PMR=,, 00:01:18.083 NVME_ZNS=,, 00:01:18.083 NVME_MS=,, 00:01:18.083 NVME_FDP=,, 00:01:18.083 SPDK_VAGRANT_DISTRO=fedora38 00:01:18.083 SPDK_VAGRANT_VMCPU=10 00:01:18.083 SPDK_VAGRANT_VMRAM=12288 00:01:18.083 SPDK_VAGRANT_PROVIDER=libvirt 00:01:18.083 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:18.083 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:18.083 SPDK_OPENSTACK_NETWORK=0 00:01:18.083 VAGRANT_PACKAGE_BOX=0 00:01:18.083 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:18.083 FORCE_DISTRO=true 00:01:18.083 VAGRANT_BOX_VERSION= 00:01:18.083 EXTRA_VAGRANTFILES= 00:01:18.083 NIC_MODEL=e1000 00:01:18.083 00:01:18.083 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:18.083 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:21.367 Bringing machine 'default' up with 'libvirt' provider... 00:01:21.626 ==> default: Creating image (snapshot of base box volume). 00:01:21.626 ==> default: Creating domain with the following settings... 
00:01:21.626 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720839027_1b4830bfb894ea4aa9cd 00:01:21.626 ==> default: -- Domain type: kvm 00:01:21.626 ==> default: -- Cpus: 10 00:01:21.626 ==> default: -- Feature: acpi 00:01:21.626 ==> default: -- Feature: apic 00:01:21.626 ==> default: -- Feature: pae 00:01:21.626 ==> default: -- Memory: 12288M 00:01:21.626 ==> default: -- Memory Backing: hugepages: 00:01:21.626 ==> default: -- Management MAC: 00:01:21.626 ==> default: -- Loader: 00:01:21.626 ==> default: -- Nvram: 00:01:21.626 ==> default: -- Base box: spdk/fedora38 00:01:21.626 ==> default: -- Storage pool: default 00:01:21.626 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720839027_1b4830bfb894ea4aa9cd.img (20G) 00:01:21.626 ==> default: -- Volume Cache: default 00:01:21.626 ==> default: -- Kernel: 00:01:21.626 ==> default: -- Initrd: 00:01:21.626 ==> default: -- Graphics Type: vnc 00:01:21.626 ==> default: -- Graphics Port: -1 00:01:21.626 ==> default: -- Graphics IP: 127.0.0.1 00:01:21.626 ==> default: -- Graphics Password: Not defined 00:01:21.626 ==> default: -- Video Type: cirrus 00:01:21.626 ==> default: -- Video VRAM: 9216 00:01:21.626 ==> default: -- Sound Type: 00:01:21.626 ==> default: -- Keymap: en-us 00:01:21.626 ==> default: -- TPM Path: 00:01:21.626 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:21.626 ==> default: -- Command line args: 00:01:21.626 ==> default: -> value=-device, 00:01:21.626 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:21.626 ==> default: -> value=-drive, 00:01:21.626 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:21.626 ==> default: -> value=-device, 00:01:21.626 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.626 ==> default: -> value=-device, 00:01:21.626 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:21.626 ==> default: -> value=-drive, 00:01:21.626 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:21.626 ==> default: -> value=-device, 00:01:21.626 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.626 ==> default: -> value=-drive, 00:01:21.626 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:21.626 ==> default: -> value=-device, 00:01:21.626 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.626 ==> default: -> value=-drive, 00:01:21.626 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:21.626 ==> default: -> value=-device, 00:01:21.626 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.886 ==> default: Creating shared folders metadata... 00:01:21.886 ==> default: Starting domain. 00:01:22.823 ==> default: Waiting for domain to get an IP address... 00:01:40.918 ==> default: Waiting for SSH to become available... 00:01:40.918 ==> default: Configuring and enabling network interfaces... 
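Vagrant hands the "-device"/"-drive" pairs in the settings dump above straight to QEMU. Flattened into a single invocation (NVMe arguments only, copied verbatim from the dump; the emulator path comes from the --qemu-emulator option earlier in the log, and everything else libvirt normally adds to the machine definition is omitted here), the two controllers look roughly like the sketch below, with nsid=2 and nsid=3 on the second controller following the same drive/nvme-ns pattern:

    # Sketch of the NVMe portion of the generated QEMU command line (arguments taken
    # from the domain settings above; the rest of the libvirt-generated machine is left out).
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096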
00:01:43.451 default: SSH address: 192.168.121.76:22 00:01:43.451 default: SSH username: vagrant 00:01:43.451 default: SSH auth method: private key 00:01:45.985 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:54.100 ==> default: Mounting SSHFS shared folder... 00:01:55.036 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:55.036 ==> default: Checking Mount.. 00:01:56.412 ==> default: Folder Successfully Mounted! 00:01:56.412 ==> default: Running provisioner: file... 00:01:56.977 default: ~/.gitconfig => .gitconfig 00:01:57.236 00:01:57.236 SUCCESS! 00:01:57.236 00:01:57.236 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:57.236 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:57.236 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:57.236 00:01:57.244 [Pipeline] } 00:01:57.261 [Pipeline] // stage 00:01:57.269 [Pipeline] dir 00:01:57.269 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:57.271 [Pipeline] { 00:01:57.283 [Pipeline] catchError 00:01:57.284 [Pipeline] { 00:01:57.295 [Pipeline] sh 00:01:57.573 + vagrant ssh-config --host vagrant 00:01:57.573 + sed -ne /^Host/,$p 00:01:57.573 + tee ssh_conf 00:02:00.873 Host vagrant 00:02:00.873 HostName 192.168.121.76 00:02:00.873 User vagrant 00:02:00.873 Port 22 00:02:00.873 UserKnownHostsFile /dev/null 00:02:00.873 StrictHostKeyChecking no 00:02:00.873 PasswordAuthentication no 00:02:00.873 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:00.873 IdentitiesOnly yes 00:02:00.873 LogLevel FATAL 00:02:00.873 ForwardAgent yes 00:02:00.873 ForwardX11 yes 00:02:00.873 00:02:00.888 [Pipeline] withEnv 00:02:00.891 [Pipeline] { 00:02:00.907 [Pipeline] sh 00:02:01.186 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:01.186 source /etc/os-release 00:02:01.186 [[ -e /image.version ]] && img=$(< /image.version) 00:02:01.186 # Minimal, systemd-like check. 00:02:01.186 if [[ -e /.dockerenv ]]; then 00:02:01.186 # Clear garbage from the node's name: 00:02:01.186 # agt-er_autotest_547-896 -> autotest_547-896 00:02:01.186 # $HOSTNAME is the actual container id 00:02:01.186 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:01.186 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:01.186 # We can assume this is a mount from a host where container is running, 00:02:01.186 # so fetch its hostname to easily identify the target swarm worker. 
00:02:01.186 container="$(< /etc/hostname) ($agent)" 00:02:01.186 else 00:02:01.186 # Fallback 00:02:01.186 container=$agent 00:02:01.186 fi 00:02:01.186 fi 00:02:01.186 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:01.186 00:02:01.457 [Pipeline] } 00:02:01.476 [Pipeline] // withEnv 00:02:01.484 [Pipeline] setCustomBuildProperty 00:02:01.498 [Pipeline] stage 00:02:01.500 [Pipeline] { (Tests) 00:02:01.518 [Pipeline] sh 00:02:01.797 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:02.070 [Pipeline] sh 00:02:02.350 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:02.624 [Pipeline] timeout 00:02:02.624 Timeout set to expire in 30 min 00:02:02.626 [Pipeline] { 00:02:02.643 [Pipeline] sh 00:02:02.924 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:03.491 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:02:03.504 [Pipeline] sh 00:02:03.783 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:04.055 [Pipeline] sh 00:02:04.332 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:04.605 [Pipeline] sh 00:02:04.883 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:04.883 ++ readlink -f spdk_repo 00:02:04.883 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:04.883 + [[ -n /home/vagrant/spdk_repo ]] 00:02:04.883 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:04.883 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:04.883 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:04.883 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:04.883 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:04.883 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:04.883 + cd /home/vagrant/spdk_repo 00:02:04.883 + source /etc/os-release 00:02:04.883 ++ NAME='Fedora Linux' 00:02:04.883 ++ VERSION='38 (Cloud Edition)' 00:02:04.883 ++ ID=fedora 00:02:04.883 ++ VERSION_ID=38 00:02:04.883 ++ VERSION_CODENAME= 00:02:04.883 ++ PLATFORM_ID=platform:f38 00:02:04.883 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:04.883 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:04.883 ++ LOGO=fedora-logo-icon 00:02:04.883 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:04.883 ++ HOME_URL=https://fedoraproject.org/ 00:02:04.883 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:04.883 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:04.883 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:04.883 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:04.883 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:04.883 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:04.883 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:04.883 ++ SUPPORT_END=2024-05-14 00:02:04.883 ++ VARIANT='Cloud Edition' 00:02:04.883 ++ VARIANT_ID=cloud 00:02:04.883 + uname -a 00:02:05.141 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:05.141 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:05.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:05.399 Hugepages 00:02:05.399 node hugesize free / total 00:02:05.399 node0 1048576kB 0 / 0 00:02:05.399 node0 2048kB 0 / 0 00:02:05.399 00:02:05.399 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:05.657 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:05.657 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:05.657 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:05.657 + rm -f /tmp/spdk-ld-path 00:02:05.657 + source autorun-spdk.conf 00:02:05.657 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.657 ++ SPDK_TEST_NVMF=1 00:02:05.657 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.657 ++ SPDK_TEST_URING=1 00:02:05.657 ++ SPDK_TEST_VFIOUSER=1 00:02:05.657 ++ SPDK_TEST_USDT=1 00:02:05.657 ++ SPDK_RUN_ASAN=1 00:02:05.657 ++ SPDK_RUN_UBSAN=1 00:02:05.657 ++ NET_TYPE=virt 00:02:05.657 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.657 ++ RUN_NIGHTLY=1 00:02:05.657 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:05.657 + [[ -n '' ]] 00:02:05.657 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:05.657 + for M in /var/spdk/build-*-manifest.txt 00:02:05.657 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:05.657 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.657 + for M in /var/spdk/build-*-manifest.txt 00:02:05.657 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:05.657 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.657 ++ uname 00:02:05.657 + [[ Linux == \L\i\n\u\x ]] 00:02:05.657 + sudo dmesg -T 00:02:05.657 + sudo dmesg --clear 00:02:05.657 + dmesg_pid=5155 00:02:05.657 + sudo dmesg -Tw 00:02:05.657 + [[ Fedora Linux == FreeBSD ]] 00:02:05.657 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.657 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.657 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:05.657 + [[ -x 
/usr/src/fio-static/fio ]] 00:02:05.657 + export FIO_BIN=/usr/src/fio-static/fio 00:02:05.657 + FIO_BIN=/usr/src/fio-static/fio 00:02:05.657 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:05.657 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:05.657 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:05.657 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.657 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.657 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:05.657 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.657 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.657 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:05.657 Test configuration: 00:02:05.657 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.657 SPDK_TEST_NVMF=1 00:02:05.657 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.657 SPDK_TEST_URING=1 00:02:05.657 SPDK_TEST_VFIOUSER=1 00:02:05.657 SPDK_TEST_USDT=1 00:02:05.657 SPDK_RUN_ASAN=1 00:02:05.657 SPDK_RUN_UBSAN=1 00:02:05.657 NET_TYPE=virt 00:02:05.657 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.657 RUN_NIGHTLY=1 02:51:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:05.657 02:51:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:05.657 02:51:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.657 02:51:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.657 02:51:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.657 02:51:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.657 02:51:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.657 02:51:12 -- paths/export.sh@5 -- $ export PATH 00:02:05.657 02:51:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.657 02:51:12 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:05.915 02:51:12 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:05.915 02:51:12 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720839072.XXXXXX 00:02:05.915 
02:51:12 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720839072.dsGjRx 00:02:05.915 02:51:12 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:05.915 02:51:12 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:05.915 02:51:12 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:05.915 02:51:12 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:05.915 02:51:12 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:05.915 02:51:12 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:05.915 02:51:12 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:05.915 02:51:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.915 02:51:12 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:05.915 02:51:12 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:05.915 02:51:12 -- pm/common@17 -- $ local monitor 00:02:05.915 02:51:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.915 02:51:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.915 02:51:12 -- pm/common@25 -- $ sleep 1 00:02:05.915 02:51:12 -- pm/common@21 -- $ date +%s 00:02:05.915 02:51:12 -- pm/common@21 -- $ date +%s 00:02:05.915 02:51:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720839072 00:02:05.915 02:51:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720839072 00:02:05.915 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720839072_collect-vmstat.pm.log 00:02:05.915 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720839072_collect-cpu-load.pm.log 00:02:06.847 02:51:13 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:06.847 02:51:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:06.847 02:51:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:06.847 02:51:13 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:06.847 02:51:13 -- spdk/autobuild.sh@16 -- $ date -u 00:02:06.847 Sat Jul 13 02:51:13 AM UTC 2024 00:02:06.847 02:51:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:06.847 v24.09-pre-202-g719d03c6a 00:02:06.847 02:51:13 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:06.847 02:51:13 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:06.847 02:51:13 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:06.847 02:51:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:06.847 02:51:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.847 ************************************ 00:02:06.847 START TEST asan 00:02:06.847 ************************************ 00:02:06.847 02:51:13 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:02:06.847 using asan 00:02:06.847 00:02:06.847 real 
0m0.000s 00:02:06.847 user 0m0.000s 00:02:06.847 sys 0m0.000s 00:02:06.847 02:51:13 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:06.847 02:51:13 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:06.847 ************************************ 00:02:06.847 END TEST asan 00:02:06.847 ************************************ 00:02:06.847 02:51:13 -- common/autotest_common.sh@1142 -- $ return 0 00:02:06.847 02:51:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:06.847 02:51:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:06.847 02:51:13 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:06.847 02:51:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:06.847 02:51:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.847 ************************************ 00:02:06.847 START TEST ubsan 00:02:06.847 ************************************ 00:02:06.847 using ubsan 00:02:06.847 02:51:13 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:06.847 00:02:06.847 real 0m0.000s 00:02:06.847 user 0m0.000s 00:02:06.847 sys 0m0.000s 00:02:06.847 02:51:13 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:06.847 02:51:13 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:06.847 ************************************ 00:02:06.847 END TEST ubsan 00:02:06.847 ************************************ 00:02:06.847 02:51:13 -- common/autotest_common.sh@1142 -- $ return 0 00:02:06.847 02:51:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:06.847 02:51:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:06.847 02:51:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:06.847 02:51:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:06.847 02:51:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:06.847 02:51:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:06.847 02:51:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:06.847 02:51:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:06.847 02:51:13 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:02:07.106 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:07.106 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:07.672 Using 'verbs' RDMA provider 00:02:23.490 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:35.698 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:35.698 Creating mk/config.mk...done. 00:02:35.698 Creating mk/cc.flags.mk...done. 00:02:35.698 Type 'make' to build. 00:02:35.698 02:51:40 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:35.698 02:51:40 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:35.698 02:51:40 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:35.698 02:51:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.698 ************************************ 00:02:35.698 START TEST make 00:02:35.698 ************************************ 00:02:35.698 02:51:40 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:35.698 make[1]: Nothing to be done for 'all'. 
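The flag set passed to configure above comes from get_config_params in autobuild_common.sh; the same build can be repeated by hand inside the test VM with the commands below (the flags are copied from the log, while running them from /home/vagrant/spdk_repo/spdk as the vagrant user is an assumption about the environment):

    # Repeat the configure + build step manually with the flags used by this job.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user \
        --with-uring --with-shared
    make -j10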
00:02:35.957 The Meson build system 00:02:35.957 Version: 1.3.1 00:02:35.957 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:35.957 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:35.957 Build type: native build 00:02:35.957 Project name: libvfio-user 00:02:35.957 Project version: 0.0.1 00:02:35.957 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:35.957 C linker for the host machine: cc ld.bfd 2.39-16 00:02:35.957 Host machine cpu family: x86_64 00:02:35.957 Host machine cpu: x86_64 00:02:35.957 Run-time dependency threads found: YES 00:02:35.957 Library dl found: YES 00:02:35.957 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:35.957 Run-time dependency json-c found: YES 0.17 00:02:35.957 Run-time dependency cmocka found: YES 1.1.7 00:02:35.957 Program pytest-3 found: NO 00:02:35.957 Program flake8 found: NO 00:02:35.957 Program misspell-fixer found: NO 00:02:35.957 Program restructuredtext-lint found: NO 00:02:35.957 Program valgrind found: YES (/usr/bin/valgrind) 00:02:35.957 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:35.957 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:35.957 Compiler for C supports arguments -Wwrite-strings: YES 00:02:35.957 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:35.957 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:35.957 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:35.957 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:35.957 Build targets in project: 8 00:02:35.957 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:35.957 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:35.957 00:02:35.957 libvfio-user 0.0.1 00:02:35.957 00:02:35.957 User defined options 00:02:35.957 buildtype : debug 00:02:35.957 default_library: shared 00:02:35.957 libdir : /usr/local/lib 00:02:35.957 00:02:35.957 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.524 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:36.524 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:36.524 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:36.524 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:36.524 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:36.524 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:36.524 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:36.524 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:36.524 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:36.524 [9/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:36.524 [10/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:36.782 [11/37] Compiling C object samples/null.p/null.c.o 00:02:36.782 [12/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:36.782 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:36.782 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:36.782 [15/37] Compiling C object samples/server.p/server.c.o 00:02:36.782 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:36.782 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:36.782 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:36.782 [19/37] Compiling C object samples/client.p/client.c.o 00:02:36.782 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:36.782 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:36.782 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:36.782 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:36.782 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:36.782 [25/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:36.782 [26/37] Linking target samples/client 00:02:36.782 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:02:36.782 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:36.782 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:37.040 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:37.040 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:37.040 [32/37] Linking target test/unit_tests 00:02:37.040 [33/37] Linking target samples/server 00:02:37.040 [34/37] Linking target samples/null 00:02:37.040 [35/37] Linking target samples/lspci 00:02:37.040 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:37.040 [37/37] Linking target samples/gpio-pci-idio-16 00:02:37.040 INFO: autodetecting backend as ninja 00:02:37.040 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:37.040 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:37.609 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:37.609 ninja: no work to do. 00:02:45.720 The Meson build system 00:02:45.720 Version: 1.3.1 00:02:45.720 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:45.720 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:45.720 Build type: native build 00:02:45.720 Program cat found: YES (/usr/bin/cat) 00:02:45.720 Project name: DPDK 00:02:45.720 Project version: 24.03.0 00:02:45.720 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:45.720 C linker for the host machine: cc ld.bfd 2.39-16 00:02:45.720 Host machine cpu family: x86_64 00:02:45.720 Host machine cpu: x86_64 00:02:45.720 Message: ## Building in Developer Mode ## 00:02:45.720 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:45.720 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:45.720 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:45.720 Program python3 found: YES (/usr/bin/python3) 00:02:45.720 Program cat found: YES (/usr/bin/cat) 00:02:45.720 Compiler for C supports arguments -march=native: YES 00:02:45.720 Checking for size of "void *" : 8 00:02:45.720 Checking for size of "void *" : 8 (cached) 00:02:45.720 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:45.720 Library m found: YES 00:02:45.720 Library numa found: YES 00:02:45.720 Has header "numaif.h" : YES 00:02:45.720 Library fdt found: NO 00:02:45.720 Library execinfo found: NO 00:02:45.720 Has header "execinfo.h" : YES 00:02:45.720 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:45.720 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:45.720 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:45.720 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:45.720 Run-time dependency openssl found: YES 3.0.9 00:02:45.720 Run-time dependency libpcap found: YES 1.10.4 00:02:45.720 Has header "pcap.h" with dependency libpcap: YES 00:02:45.720 Compiler for C supports arguments -Wcast-qual: YES 00:02:45.720 Compiler for C supports arguments -Wdeprecated: YES 00:02:45.720 Compiler for C supports arguments -Wformat: YES 00:02:45.720 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:45.720 Compiler for C supports arguments -Wformat-security: NO 00:02:45.720 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:45.720 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:45.720 Compiler for C supports arguments -Wnested-externs: YES 00:02:45.720 Compiler for C supports arguments -Wold-style-definition: YES 00:02:45.720 Compiler for C supports arguments -Wpointer-arith: YES 00:02:45.720 Compiler for C supports arguments -Wsign-compare: YES 00:02:45.720 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:45.720 Compiler for C supports arguments -Wundef: YES 00:02:45.720 Compiler for C supports arguments -Wwrite-strings: YES 00:02:45.720 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:45.720 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:45.720 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:45.720 Compiler for C supports arguments -Wno-zero-length-bounds: 
YES 00:02:45.720 Program objdump found: YES (/usr/bin/objdump) 00:02:45.720 Compiler for C supports arguments -mavx512f: YES 00:02:45.720 Checking if "AVX512 checking" compiles: YES 00:02:45.720 Fetching value of define "__SSE4_2__" : 1 00:02:45.720 Fetching value of define "__AES__" : 1 00:02:45.720 Fetching value of define "__AVX__" : 1 00:02:45.720 Fetching value of define "__AVX2__" : 1 00:02:45.720 Fetching value of define "__AVX512BW__" : (undefined) 00:02:45.720 Fetching value of define "__AVX512CD__" : (undefined) 00:02:45.720 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:45.720 Fetching value of define "__AVX512F__" : (undefined) 00:02:45.720 Fetching value of define "__AVX512VL__" : (undefined) 00:02:45.720 Fetching value of define "__PCLMUL__" : 1 00:02:45.721 Fetching value of define "__RDRND__" : 1 00:02:45.721 Fetching value of define "__RDSEED__" : 1 00:02:45.721 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:45.721 Fetching value of define "__znver1__" : (undefined) 00:02:45.721 Fetching value of define "__znver2__" : (undefined) 00:02:45.721 Fetching value of define "__znver3__" : (undefined) 00:02:45.721 Fetching value of define "__znver4__" : (undefined) 00:02:45.721 Library asan found: YES 00:02:45.721 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:45.721 Message: lib/log: Defining dependency "log" 00:02:45.721 Message: lib/kvargs: Defining dependency "kvargs" 00:02:45.721 Message: lib/telemetry: Defining dependency "telemetry" 00:02:45.721 Library rt found: YES 00:02:45.721 Checking for function "getentropy" : NO 00:02:45.721 Message: lib/eal: Defining dependency "eal" 00:02:45.721 Message: lib/ring: Defining dependency "ring" 00:02:45.721 Message: lib/rcu: Defining dependency "rcu" 00:02:45.721 Message: lib/mempool: Defining dependency "mempool" 00:02:45.721 Message: lib/mbuf: Defining dependency "mbuf" 00:02:45.721 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:45.721 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:45.721 Compiler for C supports arguments -mpclmul: YES 00:02:45.721 Compiler for C supports arguments -maes: YES 00:02:45.721 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:45.721 Compiler for C supports arguments -mavx512bw: YES 00:02:45.721 Compiler for C supports arguments -mavx512dq: YES 00:02:45.721 Compiler for C supports arguments -mavx512vl: YES 00:02:45.721 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:45.721 Compiler for C supports arguments -mavx2: YES 00:02:45.721 Compiler for C supports arguments -mavx: YES 00:02:45.721 Message: lib/net: Defining dependency "net" 00:02:45.721 Message: lib/meter: Defining dependency "meter" 00:02:45.721 Message: lib/ethdev: Defining dependency "ethdev" 00:02:45.721 Message: lib/pci: Defining dependency "pci" 00:02:45.721 Message: lib/cmdline: Defining dependency "cmdline" 00:02:45.721 Message: lib/hash: Defining dependency "hash" 00:02:45.721 Message: lib/timer: Defining dependency "timer" 00:02:45.721 Message: lib/compressdev: Defining dependency "compressdev" 00:02:45.721 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:45.721 Message: lib/dmadev: Defining dependency "dmadev" 00:02:45.721 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:45.721 Message: lib/power: Defining dependency "power" 00:02:45.721 Message: lib/reorder: Defining dependency "reorder" 00:02:45.721 Message: lib/security: Defining dependency "security" 00:02:45.721 Has header "linux/userfaultfd.h" : YES 
00:02:45.721 Has header "linux/vduse.h" : YES 00:02:45.721 Message: lib/vhost: Defining dependency "vhost" 00:02:45.721 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:45.721 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:45.721 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:45.721 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:45.721 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:45.721 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:45.721 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:45.721 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:45.721 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:45.721 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:45.721 Program doxygen found: YES (/usr/bin/doxygen) 00:02:45.721 Configuring doxy-api-html.conf using configuration 00:02:45.721 Configuring doxy-api-man.conf using configuration 00:02:45.721 Program mandb found: YES (/usr/bin/mandb) 00:02:45.721 Program sphinx-build found: NO 00:02:45.721 Configuring rte_build_config.h using configuration 00:02:45.721 Message: 00:02:45.721 ================= 00:02:45.721 Applications Enabled 00:02:45.721 ================= 00:02:45.721 00:02:45.721 apps: 00:02:45.721 00:02:45.721 00:02:45.721 Message: 00:02:45.721 ================= 00:02:45.721 Libraries Enabled 00:02:45.721 ================= 00:02:45.721 00:02:45.721 libs: 00:02:45.721 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:45.721 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:45.721 cryptodev, dmadev, power, reorder, security, vhost, 00:02:45.721 00:02:45.721 Message: 00:02:45.721 =============== 00:02:45.721 Drivers Enabled 00:02:45.721 =============== 00:02:45.721 00:02:45.721 common: 00:02:45.721 00:02:45.721 bus: 00:02:45.721 pci, vdev, 00:02:45.721 mempool: 00:02:45.721 ring, 00:02:45.721 dma: 00:02:45.721 00:02:45.721 net: 00:02:45.721 00:02:45.721 crypto: 00:02:45.721 00:02:45.721 compress: 00:02:45.721 00:02:45.721 vdpa: 00:02:45.721 00:02:45.721 00:02:45.721 Message: 00:02:45.721 ================= 00:02:45.721 Content Skipped 00:02:45.721 ================= 00:02:45.721 00:02:45.721 apps: 00:02:45.721 dumpcap: explicitly disabled via build config 00:02:45.721 graph: explicitly disabled via build config 00:02:45.721 pdump: explicitly disabled via build config 00:02:45.721 proc-info: explicitly disabled via build config 00:02:45.721 test-acl: explicitly disabled via build config 00:02:45.721 test-bbdev: explicitly disabled via build config 00:02:45.721 test-cmdline: explicitly disabled via build config 00:02:45.721 test-compress-perf: explicitly disabled via build config 00:02:45.721 test-crypto-perf: explicitly disabled via build config 00:02:45.721 test-dma-perf: explicitly disabled via build config 00:02:45.721 test-eventdev: explicitly disabled via build config 00:02:45.721 test-fib: explicitly disabled via build config 00:02:45.721 test-flow-perf: explicitly disabled via build config 00:02:45.721 test-gpudev: explicitly disabled via build config 00:02:45.721 test-mldev: explicitly disabled via build config 00:02:45.721 test-pipeline: explicitly disabled via build config 00:02:45.721 test-pmd: explicitly disabled via build config 00:02:45.721 test-regex: explicitly disabled via build config 00:02:45.721 test-sad: explicitly disabled via build 
config 00:02:45.721 test-security-perf: explicitly disabled via build config 00:02:45.721 00:02:45.721 libs: 00:02:45.721 argparse: explicitly disabled via build config 00:02:45.721 metrics: explicitly disabled via build config 00:02:45.721 acl: explicitly disabled via build config 00:02:45.721 bbdev: explicitly disabled via build config 00:02:45.721 bitratestats: explicitly disabled via build config 00:02:45.721 bpf: explicitly disabled via build config 00:02:45.721 cfgfile: explicitly disabled via build config 00:02:45.721 distributor: explicitly disabled via build config 00:02:45.721 efd: explicitly disabled via build config 00:02:45.721 eventdev: explicitly disabled via build config 00:02:45.721 dispatcher: explicitly disabled via build config 00:02:45.721 gpudev: explicitly disabled via build config 00:02:45.721 gro: explicitly disabled via build config 00:02:45.721 gso: explicitly disabled via build config 00:02:45.721 ip_frag: explicitly disabled via build config 00:02:45.721 jobstats: explicitly disabled via build config 00:02:45.721 latencystats: explicitly disabled via build config 00:02:45.721 lpm: explicitly disabled via build config 00:02:45.721 member: explicitly disabled via build config 00:02:45.721 pcapng: explicitly disabled via build config 00:02:45.721 rawdev: explicitly disabled via build config 00:02:45.721 regexdev: explicitly disabled via build config 00:02:45.721 mldev: explicitly disabled via build config 00:02:45.721 rib: explicitly disabled via build config 00:02:45.721 sched: explicitly disabled via build config 00:02:45.722 stack: explicitly disabled via build config 00:02:45.722 ipsec: explicitly disabled via build config 00:02:45.722 pdcp: explicitly disabled via build config 00:02:45.722 fib: explicitly disabled via build config 00:02:45.722 port: explicitly disabled via build config 00:02:45.722 pdump: explicitly disabled via build config 00:02:45.722 table: explicitly disabled via build config 00:02:45.722 pipeline: explicitly disabled via build config 00:02:45.722 graph: explicitly disabled via build config 00:02:45.722 node: explicitly disabled via build config 00:02:45.722 00:02:45.722 drivers: 00:02:45.722 common/cpt: not in enabled drivers build config 00:02:45.722 common/dpaax: not in enabled drivers build config 00:02:45.722 common/iavf: not in enabled drivers build config 00:02:45.722 common/idpf: not in enabled drivers build config 00:02:45.722 common/ionic: not in enabled drivers build config 00:02:45.722 common/mvep: not in enabled drivers build config 00:02:45.722 common/octeontx: not in enabled drivers build config 00:02:45.722 bus/auxiliary: not in enabled drivers build config 00:02:45.722 bus/cdx: not in enabled drivers build config 00:02:45.722 bus/dpaa: not in enabled drivers build config 00:02:45.722 bus/fslmc: not in enabled drivers build config 00:02:45.722 bus/ifpga: not in enabled drivers build config 00:02:45.722 bus/platform: not in enabled drivers build config 00:02:45.722 bus/uacce: not in enabled drivers build config 00:02:45.722 bus/vmbus: not in enabled drivers build config 00:02:45.722 common/cnxk: not in enabled drivers build config 00:02:45.722 common/mlx5: not in enabled drivers build config 00:02:45.722 common/nfp: not in enabled drivers build config 00:02:45.722 common/nitrox: not in enabled drivers build config 00:02:45.722 common/qat: not in enabled drivers build config 00:02:45.722 common/sfc_efx: not in enabled drivers build config 00:02:45.722 mempool/bucket: not in enabled drivers build config 00:02:45.722 
mempool/cnxk: not in enabled drivers build config 00:02:45.722 mempool/dpaa: not in enabled drivers build config 00:02:45.722 mempool/dpaa2: not in enabled drivers build config 00:02:45.722 mempool/octeontx: not in enabled drivers build config 00:02:45.722 mempool/stack: not in enabled drivers build config 00:02:45.722 dma/cnxk: not in enabled drivers build config 00:02:45.722 dma/dpaa: not in enabled drivers build config 00:02:45.722 dma/dpaa2: not in enabled drivers build config 00:02:45.722 dma/hisilicon: not in enabled drivers build config 00:02:45.722 dma/idxd: not in enabled drivers build config 00:02:45.722 dma/ioat: not in enabled drivers build config 00:02:45.722 dma/skeleton: not in enabled drivers build config 00:02:45.722 net/af_packet: not in enabled drivers build config 00:02:45.722 net/af_xdp: not in enabled drivers build config 00:02:45.722 net/ark: not in enabled drivers build config 00:02:45.722 net/atlantic: not in enabled drivers build config 00:02:45.722 net/avp: not in enabled drivers build config 00:02:45.722 net/axgbe: not in enabled drivers build config 00:02:45.722 net/bnx2x: not in enabled drivers build config 00:02:45.722 net/bnxt: not in enabled drivers build config 00:02:45.722 net/bonding: not in enabled drivers build config 00:02:45.722 net/cnxk: not in enabled drivers build config 00:02:45.722 net/cpfl: not in enabled drivers build config 00:02:45.722 net/cxgbe: not in enabled drivers build config 00:02:45.722 net/dpaa: not in enabled drivers build config 00:02:45.722 net/dpaa2: not in enabled drivers build config 00:02:45.722 net/e1000: not in enabled drivers build config 00:02:45.722 net/ena: not in enabled drivers build config 00:02:45.722 net/enetc: not in enabled drivers build config 00:02:45.722 net/enetfec: not in enabled drivers build config 00:02:45.722 net/enic: not in enabled drivers build config 00:02:45.722 net/failsafe: not in enabled drivers build config 00:02:45.722 net/fm10k: not in enabled drivers build config 00:02:45.722 net/gve: not in enabled drivers build config 00:02:45.722 net/hinic: not in enabled drivers build config 00:02:45.722 net/hns3: not in enabled drivers build config 00:02:45.722 net/i40e: not in enabled drivers build config 00:02:45.722 net/iavf: not in enabled drivers build config 00:02:45.722 net/ice: not in enabled drivers build config 00:02:45.722 net/idpf: not in enabled drivers build config 00:02:45.722 net/igc: not in enabled drivers build config 00:02:45.722 net/ionic: not in enabled drivers build config 00:02:45.722 net/ipn3ke: not in enabled drivers build config 00:02:45.722 net/ixgbe: not in enabled drivers build config 00:02:45.722 net/mana: not in enabled drivers build config 00:02:45.722 net/memif: not in enabled drivers build config 00:02:45.722 net/mlx4: not in enabled drivers build config 00:02:45.722 net/mlx5: not in enabled drivers build config 00:02:45.722 net/mvneta: not in enabled drivers build config 00:02:45.722 net/mvpp2: not in enabled drivers build config 00:02:45.722 net/netvsc: not in enabled drivers build config 00:02:45.722 net/nfb: not in enabled drivers build config 00:02:45.722 net/nfp: not in enabled drivers build config 00:02:45.722 net/ngbe: not in enabled drivers build config 00:02:45.722 net/null: not in enabled drivers build config 00:02:45.722 net/octeontx: not in enabled drivers build config 00:02:45.722 net/octeon_ep: not in enabled drivers build config 00:02:45.722 net/pcap: not in enabled drivers build config 00:02:45.722 net/pfe: not in enabled drivers build config 
00:02:45.722 net/qede: not in enabled drivers build config 00:02:45.722 net/ring: not in enabled drivers build config 00:02:45.722 net/sfc: not in enabled drivers build config 00:02:45.722 net/softnic: not in enabled drivers build config 00:02:45.722 net/tap: not in enabled drivers build config 00:02:45.722 net/thunderx: not in enabled drivers build config 00:02:45.722 net/txgbe: not in enabled drivers build config 00:02:45.722 net/vdev_netvsc: not in enabled drivers build config 00:02:45.722 net/vhost: not in enabled drivers build config 00:02:45.722 net/virtio: not in enabled drivers build config 00:02:45.722 net/vmxnet3: not in enabled drivers build config 00:02:45.722 raw/*: missing internal dependency, "rawdev" 00:02:45.722 crypto/armv8: not in enabled drivers build config 00:02:45.722 crypto/bcmfs: not in enabled drivers build config 00:02:45.722 crypto/caam_jr: not in enabled drivers build config 00:02:45.722 crypto/ccp: not in enabled drivers build config 00:02:45.722 crypto/cnxk: not in enabled drivers build config 00:02:45.722 crypto/dpaa_sec: not in enabled drivers build config 00:02:45.722 crypto/dpaa2_sec: not in enabled drivers build config 00:02:45.722 crypto/ipsec_mb: not in enabled drivers build config 00:02:45.722 crypto/mlx5: not in enabled drivers build config 00:02:45.722 crypto/mvsam: not in enabled drivers build config 00:02:45.722 crypto/nitrox: not in enabled drivers build config 00:02:45.722 crypto/null: not in enabled drivers build config 00:02:45.722 crypto/octeontx: not in enabled drivers build config 00:02:45.722 crypto/openssl: not in enabled drivers build config 00:02:45.722 crypto/scheduler: not in enabled drivers build config 00:02:45.722 crypto/uadk: not in enabled drivers build config 00:02:45.722 crypto/virtio: not in enabled drivers build config 00:02:45.722 compress/isal: not in enabled drivers build config 00:02:45.722 compress/mlx5: not in enabled drivers build config 00:02:45.722 compress/nitrox: not in enabled drivers build config 00:02:45.722 compress/octeontx: not in enabled drivers build config 00:02:45.722 compress/zlib: not in enabled drivers build config 00:02:45.722 regex/*: missing internal dependency, "regexdev" 00:02:45.722 ml/*: missing internal dependency, "mldev" 00:02:45.722 vdpa/ifc: not in enabled drivers build config 00:02:45.722 vdpa/mlx5: not in enabled drivers build config 00:02:45.722 vdpa/nfp: not in enabled drivers build config 00:02:45.722 vdpa/sfc: not in enabled drivers build config 00:02:45.722 event/*: missing internal dependency, "eventdev" 00:02:45.722 baseband/*: missing internal dependency, "bbdev" 00:02:45.722 gpu/*: missing internal dependency, "gpudev" 00:02:45.722 00:02:45.722 00:02:45.722 Build targets in project: 85 00:02:45.722 00:02:45.722 DPDK 24.03.0 00:02:45.722 00:02:45.722 User defined options 00:02:45.722 buildtype : debug 00:02:45.722 default_library : shared 00:02:45.722 libdir : lib 00:02:45.722 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:45.722 b_sanitize : address 00:02:45.723 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:45.723 c_link_args : 00:02:45.723 cpu_instruction_set: native 00:02:45.723 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:45.723 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:45.723 enable_docs : false 00:02:45.723 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:45.723 enable_kmods : false 00:02:45.723 max_lcores : 128 00:02:45.723 tests : false 00:02:45.723 00:02:45.723 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:46.288 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:46.288 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:46.288 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:46.288 [3/268] Linking static target lib/librte_kvargs.a 00:02:46.288 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:46.288 [5/268] Linking static target lib/librte_log.a 00:02:46.288 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:46.851 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.851 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:46.851 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:47.108 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:47.108 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:47.108 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:47.108 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:47.108 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:47.365 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:47.365 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:47.365 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:47.365 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.365 [19/268] Linking static target lib/librte_telemetry.a 00:02:47.365 [20/268] Linking target lib/librte_log.so.24.1 00:02:47.622 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:47.622 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:47.880 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:47.881 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:48.138 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:48.138 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:48.138 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:48.138 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:48.138 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:48.138 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:48.138 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.395 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:48.395 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 
00:02:48.395 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:48.395 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:48.653 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:48.653 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:48.910 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:48.910 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:48.910 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:49.167 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:49.167 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:49.167 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:49.167 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:49.167 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:49.424 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:49.424 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:49.424 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:49.681 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:49.681 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:49.939 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:49.939 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:49.939 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:49.939 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:50.196 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:50.196 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:50.454 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:50.454 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:50.454 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:50.454 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:50.712 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:50.712 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:50.712 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:50.712 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:50.969 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:50.969 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:51.227 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:51.484 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:51.484 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:51.484 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:51.484 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:51.484 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:51.742 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 
00:02:51.742 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:51.742 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:52.000 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:52.000 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:52.000 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:52.258 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:52.258 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:52.258 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:52.258 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:52.514 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:52.774 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:52.774 [85/268] Linking static target lib/librte_eal.a 00:02:52.774 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:52.774 [87/268] Linking static target lib/librte_ring.a 00:02:53.339 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:53.339 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:53.339 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:53.339 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:53.339 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:53.339 [93/268] Linking static target lib/librte_mempool.a 00:02:53.339 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:53.339 [95/268] Linking static target lib/librte_rcu.a 00:02:53.339 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.597 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:53.855 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:53.855 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:53.855 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.483 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:54.483 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:54.483 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:54.483 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:54.747 [105/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.747 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:54.747 [107/268] Linking static target lib/librte_mbuf.a 00:02:54.747 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:54.747 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:54.747 [110/268] Linking static target lib/librte_net.a 00:02:54.747 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:55.005 [112/268] Linking static target lib/librte_meter.a 00:02:55.263 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:55.263 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.264 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:55.264 
[116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.264 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:55.264 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:55.830 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.830 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:56.088 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:56.088 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:56.346 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:56.346 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:56.603 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:56.603 [126/268] Linking static target lib/librte_pci.a 00:02:56.603 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:56.603 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:56.603 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:56.603 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:56.861 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:56.861 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:56.861 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:56.861 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.861 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:56.861 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:57.117 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:57.117 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:57.117 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:57.117 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:57.117 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:57.117 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:57.117 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:57.117 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:57.375 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:57.375 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:57.375 [147/268] Linking static target lib/librte_cmdline.a 00:02:57.632 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:57.890 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:57.890 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:57.890 [151/268] Linking static target lib/librte_timer.a 00:02:58.148 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:58.148 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:58.148 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:58.148 [155/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:58.148 [156/268] Linking static target lib/librte_ethdev.a 00:02:58.406 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:58.664 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.664 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:58.664 [160/268] Linking static target lib/librte_compressdev.a 00:02:58.923 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:58.923 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:58.923 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:59.181 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:59.181 [165/268] Linking static target lib/librte_hash.a 00:02:59.181 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:59.182 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.182 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:59.182 [169/268] Linking static target lib/librte_dmadev.a 00:02:59.182 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:59.441 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:59.699 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.699 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:59.699 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:59.958 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:00.216 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:00.216 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.216 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:00.216 [179/268] Linking static target lib/librte_cryptodev.a 00:03:00.216 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:00.216 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:00.475 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.475 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:00.475 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:01.041 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:01.041 [186/268] Linking static target lib/librte_power.a 00:03:01.041 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:01.041 [188/268] Linking static target lib/librte_reorder.a 00:03:01.041 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:01.299 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:01.299 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:01.299 [192/268] Linking static target lib/librte_security.a 00:03:01.299 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:01.556 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.556 [195/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:01.814 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.814 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.072 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:02.072 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:02.329 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:02.330 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:02.330 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:02.330 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.587 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:02.587 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:02.846 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:02.846 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:03.104 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:03.104 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:03.104 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:03.104 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:03.104 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:03.362 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.362 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.362 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:03.362 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:03.362 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.362 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.362 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:03.362 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:03.362 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:03.620 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.620 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:03.620 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.620 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.620 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:03.878 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.136 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.136 [229/268] Linking target lib/librte_eal.so.24.1 00:03:04.394 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:04.394 [231/268] Linking target lib/librte_ring.so.24.1 00:03:04.395 [232/268] Linking target lib/librte_meter.so.24.1 00:03:04.395 
[233/268] Linking target lib/librte_dmadev.so.24.1 00:03:04.395 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:04.395 [235/268] Linking target lib/librte_timer.so.24.1 00:03:04.395 [236/268] Linking target lib/librte_pci.so.24.1 00:03:04.653 [237/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:04.653 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:04.653 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:04.653 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:04.653 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:04.653 [242/268] Linking target lib/librte_mempool.so.24.1 00:03:04.653 [243/268] Linking target lib/librte_rcu.so.24.1 00:03:04.653 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:04.653 [245/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:04.653 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:04.653 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:04.912 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:04.912 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:04.912 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:04.912 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:04.912 [252/268] Linking target lib/librte_net.so.24.1 00:03:04.912 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:04.912 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:05.170 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:05.170 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:05.170 [257/268] Linking target lib/librte_hash.so.24.1 00:03:05.170 [258/268] Linking target lib/librte_security.so.24.1 00:03:05.170 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:05.428 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:06.012 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.012 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:06.279 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:06.279 [264/268] Linking target lib/librte_power.so.24.1 00:03:08.811 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:08.811 [266/268] Linking static target lib/librte_vhost.a 00:03:10.713 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.713 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:10.713 INFO: autodetecting backend as ninja 00:03:10.713 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:12.088 CC lib/ut/ut.o 00:03:12.088 CC lib/ut_mock/mock.o 00:03:12.088 CC lib/log/log.o 00:03:12.088 CC lib/log/log_flags.o 00:03:12.088 CC lib/log/log_deprecated.o 00:03:12.088 LIB libspdk_log.a 00:03:12.088 LIB libspdk_ut.a 00:03:12.088 LIB libspdk_ut_mock.a 00:03:12.088 SO libspdk_ut_mock.so.6.0 00:03:12.088 SO libspdk_ut.so.2.0 00:03:12.088 SO libspdk_log.so.7.0 00:03:12.088 SYMLINK libspdk_ut_mock.so 00:03:12.088 SYMLINK libspdk_ut.so 00:03:12.088 SYMLINK 
libspdk_log.so 00:03:12.346 CC lib/ioat/ioat.o 00:03:12.346 CC lib/dma/dma.o 00:03:12.346 CC lib/util/bit_array.o 00:03:12.346 CC lib/util/base64.o 00:03:12.346 CC lib/util/cpuset.o 00:03:12.346 CXX lib/trace_parser/trace.o 00:03:12.346 CC lib/util/crc16.o 00:03:12.346 CC lib/util/crc32c.o 00:03:12.346 CC lib/util/crc32.o 00:03:12.604 CC lib/util/crc32_ieee.o 00:03:12.604 CC lib/vfio_user/host/vfio_user_pci.o 00:03:12.604 CC lib/util/crc64.o 00:03:12.604 CC lib/util/dif.o 00:03:12.604 CC lib/util/fd.o 00:03:12.604 LIB libspdk_dma.a 00:03:12.604 CC lib/util/file.o 00:03:12.604 SO libspdk_dma.so.4.0 00:03:12.604 CC lib/util/hexlify.o 00:03:12.604 CC lib/util/iov.o 00:03:12.604 CC lib/util/math.o 00:03:12.862 SYMLINK libspdk_dma.so 00:03:12.862 CC lib/util/pipe.o 00:03:12.862 LIB libspdk_ioat.a 00:03:12.862 CC lib/util/strerror_tls.o 00:03:12.862 CC lib/util/string.o 00:03:12.862 SO libspdk_ioat.so.7.0 00:03:12.862 CC lib/vfio_user/host/vfio_user.o 00:03:12.862 CC lib/util/uuid.o 00:03:12.862 SYMLINK libspdk_ioat.so 00:03:12.862 CC lib/util/fd_group.o 00:03:12.862 CC lib/util/xor.o 00:03:12.862 CC lib/util/zipf.o 00:03:13.120 LIB libspdk_vfio_user.a 00:03:13.120 SO libspdk_vfio_user.so.5.0 00:03:13.378 SYMLINK libspdk_vfio_user.so 00:03:13.378 LIB libspdk_util.a 00:03:13.637 SO libspdk_util.so.9.1 00:03:13.637 LIB libspdk_trace_parser.a 00:03:13.637 SYMLINK libspdk_util.so 00:03:13.896 SO libspdk_trace_parser.so.5.0 00:03:13.896 SYMLINK libspdk_trace_parser.so 00:03:13.896 CC lib/conf/conf.o 00:03:13.896 CC lib/json/json_parse.o 00:03:13.896 CC lib/json/json_util.o 00:03:13.896 CC lib/json/json_write.o 00:03:13.896 CC lib/env_dpdk/env.o 00:03:13.896 CC lib/env_dpdk/memory.o 00:03:13.896 CC lib/vmd/vmd.o 00:03:13.896 CC lib/idxd/idxd.o 00:03:13.896 CC lib/rdma_provider/common.o 00:03:13.896 CC lib/rdma_utils/rdma_utils.o 00:03:14.155 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:14.155 CC lib/vmd/led.o 00:03:14.155 CC lib/env_dpdk/pci.o 00:03:14.155 LIB libspdk_conf.a 00:03:14.155 LIB libspdk_rdma_utils.a 00:03:14.155 SO libspdk_conf.so.6.0 00:03:14.155 SO libspdk_rdma_utils.so.1.0 00:03:14.155 LIB libspdk_json.a 00:03:14.414 SO libspdk_json.so.6.0 00:03:14.414 SYMLINK libspdk_conf.so 00:03:14.414 LIB libspdk_rdma_provider.a 00:03:14.414 SYMLINK libspdk_rdma_utils.so 00:03:14.414 CC lib/env_dpdk/init.o 00:03:14.414 CC lib/env_dpdk/threads.o 00:03:14.414 CC lib/idxd/idxd_user.o 00:03:14.414 SO libspdk_rdma_provider.so.6.0 00:03:14.414 SYMLINK libspdk_json.so 00:03:14.414 CC lib/env_dpdk/pci_ioat.o 00:03:14.414 SYMLINK libspdk_rdma_provider.so 00:03:14.414 CC lib/env_dpdk/pci_virtio.o 00:03:14.672 CC lib/env_dpdk/pci_vmd.o 00:03:14.672 CC lib/jsonrpc/jsonrpc_server.o 00:03:14.672 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:14.672 CC lib/env_dpdk/pci_idxd.o 00:03:14.672 CC lib/env_dpdk/pci_event.o 00:03:14.672 CC lib/idxd/idxd_kernel.o 00:03:14.672 CC lib/env_dpdk/sigbus_handler.o 00:03:14.672 CC lib/env_dpdk/pci_dpdk.o 00:03:14.672 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:14.672 LIB libspdk_vmd.a 00:03:14.672 SO libspdk_vmd.so.6.0 00:03:14.930 CC lib/jsonrpc/jsonrpc_client.o 00:03:14.930 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:14.930 SYMLINK libspdk_vmd.so 00:03:14.930 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:14.930 LIB libspdk_idxd.a 00:03:14.930 SO libspdk_idxd.so.12.0 00:03:14.930 SYMLINK libspdk_idxd.so 00:03:15.189 LIB libspdk_jsonrpc.a 00:03:15.189 SO libspdk_jsonrpc.so.6.0 00:03:15.189 SYMLINK libspdk_jsonrpc.so 00:03:15.448 CC lib/rpc/rpc.o 00:03:15.707 LIB libspdk_rpc.a 00:03:15.707 SO 
libspdk_rpc.so.6.0 00:03:15.707 LIB libspdk_env_dpdk.a 00:03:15.707 SYMLINK libspdk_rpc.so 00:03:15.707 SO libspdk_env_dpdk.so.14.1 00:03:15.966 CC lib/notify/notify.o 00:03:15.966 CC lib/notify/notify_rpc.o 00:03:15.966 CC lib/keyring/keyring.o 00:03:15.966 CC lib/keyring/keyring_rpc.o 00:03:15.966 SYMLINK libspdk_env_dpdk.so 00:03:15.966 CC lib/trace/trace.o 00:03:15.966 CC lib/trace/trace_flags.o 00:03:15.966 CC lib/trace/trace_rpc.o 00:03:16.226 LIB libspdk_notify.a 00:03:16.226 SO libspdk_notify.so.6.0 00:03:16.226 SYMLINK libspdk_notify.so 00:03:16.226 LIB libspdk_keyring.a 00:03:16.226 SO libspdk_keyring.so.1.0 00:03:16.484 LIB libspdk_trace.a 00:03:16.484 SO libspdk_trace.so.10.0 00:03:16.484 SYMLINK libspdk_keyring.so 00:03:16.484 SYMLINK libspdk_trace.so 00:03:16.742 CC lib/sock/sock.o 00:03:16.742 CC lib/sock/sock_rpc.o 00:03:16.742 CC lib/thread/thread.o 00:03:16.742 CC lib/thread/iobuf.o 00:03:17.308 LIB libspdk_sock.a 00:03:17.308 SO libspdk_sock.so.10.0 00:03:17.308 SYMLINK libspdk_sock.so 00:03:17.875 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:17.875 CC lib/nvme/nvme_ctrlr.o 00:03:17.876 CC lib/nvme/nvme_ns_cmd.o 00:03:17.876 CC lib/nvme/nvme_fabric.o 00:03:17.876 CC lib/nvme/nvme_ns.o 00:03:17.876 CC lib/nvme/nvme_pcie.o 00:03:17.876 CC lib/nvme/nvme_pcie_common.o 00:03:17.876 CC lib/nvme/nvme_qpair.o 00:03:17.876 CC lib/nvme/nvme.o 00:03:18.442 CC lib/nvme/nvme_quirks.o 00:03:18.700 CC lib/nvme/nvme_transport.o 00:03:18.700 CC lib/nvme/nvme_discovery.o 00:03:18.700 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:18.700 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:18.700 CC lib/nvme/nvme_tcp.o 00:03:18.958 CC lib/nvme/nvme_opal.o 00:03:18.958 LIB libspdk_thread.a 00:03:18.958 SO libspdk_thread.so.10.1 00:03:18.958 CC lib/nvme/nvme_io_msg.o 00:03:19.215 SYMLINK libspdk_thread.so 00:03:19.215 CC lib/nvme/nvme_poll_group.o 00:03:19.215 CC lib/nvme/nvme_zns.o 00:03:19.215 CC lib/nvme/nvme_stubs.o 00:03:19.215 CC lib/nvme/nvme_auth.o 00:03:19.471 CC lib/nvme/nvme_cuse.o 00:03:19.471 CC lib/nvme/nvme_vfio_user.o 00:03:19.471 CC lib/nvme/nvme_rdma.o 00:03:20.056 CC lib/accel/accel.o 00:03:20.056 CC lib/blob/blobstore.o 00:03:20.056 CC lib/init/json_config.o 00:03:20.056 CC lib/virtio/virtio.o 00:03:20.056 CC lib/virtio/virtio_vhost_user.o 00:03:20.313 CC lib/init/subsystem.o 00:03:20.313 CC lib/init/subsystem_rpc.o 00:03:20.313 CC lib/virtio/virtio_vfio_user.o 00:03:20.570 CC lib/virtio/virtio_pci.o 00:03:20.570 CC lib/init/rpc.o 00:03:20.570 CC lib/blob/request.o 00:03:20.570 CC lib/blob/zeroes.o 00:03:20.570 CC lib/accel/accel_rpc.o 00:03:20.570 CC lib/accel/accel_sw.o 00:03:20.828 LIB libspdk_init.a 00:03:20.828 SO libspdk_init.so.5.0 00:03:20.828 CC lib/blob/blob_bs_dev.o 00:03:20.828 SYMLINK libspdk_init.so 00:03:20.828 LIB libspdk_virtio.a 00:03:20.828 CC lib/vfu_tgt/tgt_rpc.o 00:03:20.828 CC lib/vfu_tgt/tgt_endpoint.o 00:03:20.828 SO libspdk_virtio.so.7.0 00:03:21.084 CC lib/event/app.o 00:03:21.084 CC lib/event/reactor.o 00:03:21.084 SYMLINK libspdk_virtio.so 00:03:21.084 CC lib/event/log_rpc.o 00:03:21.084 CC lib/event/app_rpc.o 00:03:21.084 CC lib/event/scheduler_static.o 00:03:21.084 LIB libspdk_accel.a 00:03:21.341 SO libspdk_accel.so.15.1 00:03:21.341 LIB libspdk_nvme.a 00:03:21.341 SYMLINK libspdk_accel.so 00:03:21.341 LIB libspdk_vfu_tgt.a 00:03:21.341 SO libspdk_vfu_tgt.so.3.0 00:03:21.342 SYMLINK libspdk_vfu_tgt.so 00:03:21.342 SO libspdk_nvme.so.13.1 00:03:21.599 CC lib/bdev/bdev.o 00:03:21.599 CC lib/bdev/bdev_zone.o 00:03:21.599 CC lib/bdev/bdev_rpc.o 00:03:21.599 CC 
lib/bdev/scsi_nvme.o 00:03:21.599 CC lib/bdev/part.o 00:03:21.599 LIB libspdk_event.a 00:03:21.599 SO libspdk_event.so.14.0 00:03:21.856 SYMLINK libspdk_event.so 00:03:21.856 SYMLINK libspdk_nvme.so 00:03:24.388 LIB libspdk_blob.a 00:03:24.388 SO libspdk_blob.so.11.0 00:03:24.647 SYMLINK libspdk_blob.so 00:03:24.906 CC lib/lvol/lvol.o 00:03:24.906 CC lib/blobfs/blobfs.o 00:03:24.906 CC lib/blobfs/tree.o 00:03:24.906 LIB libspdk_bdev.a 00:03:25.165 SO libspdk_bdev.so.15.1 00:03:25.165 SYMLINK libspdk_bdev.so 00:03:25.424 CC lib/ftl/ftl_core.o 00:03:25.424 CC lib/ftl/ftl_init.o 00:03:25.424 CC lib/ftl/ftl_layout.o 00:03:25.424 CC lib/ftl/ftl_debug.o 00:03:25.424 CC lib/ublk/ublk.o 00:03:25.424 CC lib/scsi/dev.o 00:03:25.424 CC lib/nbd/nbd.o 00:03:25.424 CC lib/nvmf/ctrlr.o 00:03:25.682 CC lib/nbd/nbd_rpc.o 00:03:25.682 CC lib/ublk/ublk_rpc.o 00:03:25.682 CC lib/scsi/lun.o 00:03:25.940 CC lib/ftl/ftl_io.o 00:03:25.940 CC lib/scsi/port.o 00:03:25.940 CC lib/scsi/scsi.o 00:03:25.940 CC lib/nvmf/ctrlr_discovery.o 00:03:25.940 LIB libspdk_blobfs.a 00:03:25.940 LIB libspdk_nbd.a 00:03:25.940 SO libspdk_blobfs.so.10.0 00:03:25.940 SO libspdk_nbd.so.7.0 00:03:25.940 LIB libspdk_lvol.a 00:03:25.940 SO libspdk_lvol.so.10.0 00:03:25.940 SYMLINK libspdk_blobfs.so 00:03:25.940 SYMLINK libspdk_nbd.so 00:03:26.198 CC lib/scsi/scsi_bdev.o 00:03:26.198 CC lib/scsi/scsi_pr.o 00:03:26.198 CC lib/ftl/ftl_sb.o 00:03:26.198 CC lib/scsi/scsi_rpc.o 00:03:26.198 CC lib/scsi/task.o 00:03:26.198 SYMLINK libspdk_lvol.so 00:03:26.198 CC lib/nvmf/ctrlr_bdev.o 00:03:26.198 CC lib/ftl/ftl_l2p.o 00:03:26.198 LIB libspdk_ublk.a 00:03:26.198 CC lib/nvmf/subsystem.o 00:03:26.198 SO libspdk_ublk.so.3.0 00:03:26.198 CC lib/ftl/ftl_l2p_flat.o 00:03:26.198 SYMLINK libspdk_ublk.so 00:03:26.456 CC lib/ftl/ftl_nv_cache.o 00:03:26.456 CC lib/ftl/ftl_band.o 00:03:26.456 CC lib/ftl/ftl_band_ops.o 00:03:26.456 CC lib/ftl/ftl_writer.o 00:03:26.456 CC lib/nvmf/nvmf.o 00:03:26.456 CC lib/nvmf/nvmf_rpc.o 00:03:26.714 LIB libspdk_scsi.a 00:03:26.714 CC lib/nvmf/transport.o 00:03:26.714 CC lib/nvmf/tcp.o 00:03:26.714 SO libspdk_scsi.so.9.0 00:03:26.714 CC lib/ftl/ftl_rq.o 00:03:26.972 SYMLINK libspdk_scsi.so 00:03:26.972 CC lib/ftl/ftl_reloc.o 00:03:26.972 CC lib/nvmf/stubs.o 00:03:26.972 CC lib/nvmf/mdns_server.o 00:03:27.230 CC lib/nvmf/vfio_user.o 00:03:27.488 CC lib/nvmf/rdma.o 00:03:27.488 CC lib/ftl/ftl_l2p_cache.o 00:03:27.488 CC lib/nvmf/auth.o 00:03:27.746 CC lib/ftl/ftl_p2l.o 00:03:27.746 CC lib/ftl/mngt/ftl_mngt.o 00:03:27.746 CC lib/iscsi/conn.o 00:03:27.746 CC lib/iscsi/init_grp.o 00:03:27.746 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:28.004 CC lib/iscsi/iscsi.o 00:03:28.004 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:28.004 CC lib/iscsi/md5.o 00:03:28.262 CC lib/vhost/vhost.o 00:03:28.262 CC lib/vhost/vhost_rpc.o 00:03:28.262 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:28.262 CC lib/vhost/vhost_scsi.o 00:03:28.520 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:28.520 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:28.520 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:28.778 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:28.778 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:28.778 CC lib/vhost/vhost_blk.o 00:03:28.778 CC lib/vhost/rte_vhost_user.o 00:03:29.036 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:29.036 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:29.036 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:29.294 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:29.294 CC lib/ftl/utils/ftl_conf.o 00:03:29.294 CC lib/ftl/utils/ftl_md.o 00:03:29.294 CC lib/iscsi/param.o 00:03:29.294 CC lib/iscsi/portal_grp.o 
00:03:29.294 CC lib/iscsi/tgt_node.o 00:03:29.553 CC lib/iscsi/iscsi_subsystem.o 00:03:29.553 CC lib/ftl/utils/ftl_mempool.o 00:03:29.811 CC lib/ftl/utils/ftl_bitmap.o 00:03:29.811 CC lib/iscsi/iscsi_rpc.o 00:03:29.811 CC lib/iscsi/task.o 00:03:29.811 CC lib/ftl/utils/ftl_property.o 00:03:29.811 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:29.811 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:30.069 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:30.069 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:30.069 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:30.069 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:30.069 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:30.069 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:30.069 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:30.069 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:30.069 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:30.328 LIB libspdk_vhost.a 00:03:30.328 CC lib/ftl/base/ftl_base_dev.o 00:03:30.328 LIB libspdk_iscsi.a 00:03:30.328 CC lib/ftl/base/ftl_base_bdev.o 00:03:30.328 SO libspdk_vhost.so.8.0 00:03:30.328 LIB libspdk_nvmf.a 00:03:30.328 CC lib/ftl/ftl_trace.o 00:03:30.328 SO libspdk_iscsi.so.8.0 00:03:30.328 SYMLINK libspdk_vhost.so 00:03:30.586 SO libspdk_nvmf.so.18.1 00:03:30.586 SYMLINK libspdk_iscsi.so 00:03:30.586 LIB libspdk_ftl.a 00:03:30.845 SYMLINK libspdk_nvmf.so 00:03:30.845 SO libspdk_ftl.so.9.0 00:03:31.103 SYMLINK libspdk_ftl.so 00:03:31.669 CC module/env_dpdk/env_dpdk_rpc.o 00:03:31.669 CC module/vfu_device/vfu_virtio.o 00:03:31.669 CC module/sock/uring/uring.o 00:03:31.669 CC module/accel/error/accel_error.o 00:03:31.669 CC module/accel/ioat/accel_ioat.o 00:03:31.669 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:31.669 CC module/blob/bdev/blob_bdev.o 00:03:31.669 CC module/sock/posix/posix.o 00:03:31.669 CC module/keyring/file/keyring.o 00:03:31.669 CC module/keyring/linux/keyring.o 00:03:31.669 LIB libspdk_env_dpdk_rpc.a 00:03:31.669 SO libspdk_env_dpdk_rpc.so.6.0 00:03:31.669 SYMLINK libspdk_env_dpdk_rpc.so 00:03:31.669 CC module/keyring/linux/keyring_rpc.o 00:03:31.669 CC module/keyring/file/keyring_rpc.o 00:03:31.669 CC module/accel/ioat/accel_ioat_rpc.o 00:03:31.927 CC module/accel/error/accel_error_rpc.o 00:03:31.927 CC module/vfu_device/vfu_virtio_blk.o 00:03:31.927 LIB libspdk_scheduler_dynamic.a 00:03:31.927 SO libspdk_scheduler_dynamic.so.4.0 00:03:31.927 LIB libspdk_keyring_linux.a 00:03:31.927 LIB libspdk_accel_ioat.a 00:03:31.927 LIB libspdk_blob_bdev.a 00:03:31.927 LIB libspdk_keyring_file.a 00:03:31.927 SYMLINK libspdk_scheduler_dynamic.so 00:03:31.927 SO libspdk_keyring_linux.so.1.0 00:03:31.927 SO libspdk_blob_bdev.so.11.0 00:03:31.927 SO libspdk_accel_ioat.so.6.0 00:03:31.927 SO libspdk_keyring_file.so.1.0 00:03:31.927 LIB libspdk_accel_error.a 00:03:31.927 SO libspdk_accel_error.so.2.0 00:03:31.927 SYMLINK libspdk_keyring_linux.so 00:03:31.927 SYMLINK libspdk_blob_bdev.so 00:03:31.927 SYMLINK libspdk_accel_ioat.so 00:03:31.927 SYMLINK libspdk_keyring_file.so 00:03:32.196 SYMLINK libspdk_accel_error.so 00:03:32.196 CC module/vfu_device/vfu_virtio_scsi.o 00:03:32.196 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:32.196 CC module/vfu_device/vfu_virtio_rpc.o 00:03:32.196 CC module/accel/dsa/accel_dsa.o 00:03:32.196 CC module/scheduler/gscheduler/gscheduler.o 00:03:32.196 CC module/accel/iaa/accel_iaa.o 00:03:32.196 LIB libspdk_scheduler_dpdk_governor.a 00:03:32.478 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:32.478 CC module/accel/dsa/accel_dsa_rpc.o 00:03:32.478 CC module/bdev/delay/vbdev_delay.o 00:03:32.478 LIB 
libspdk_scheduler_gscheduler.a 00:03:32.478 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:32.478 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:32.478 SO libspdk_scheduler_gscheduler.so.4.0 00:03:32.478 CC module/accel/iaa/accel_iaa_rpc.o 00:03:32.478 SYMLINK libspdk_scheduler_gscheduler.so 00:03:32.478 LIB libspdk_vfu_device.a 00:03:32.478 LIB libspdk_accel_dsa.a 00:03:32.478 LIB libspdk_sock_posix.a 00:03:32.478 LIB libspdk_sock_uring.a 00:03:32.478 SO libspdk_vfu_device.so.3.0 00:03:32.478 SO libspdk_accel_dsa.so.5.0 00:03:32.735 SO libspdk_sock_uring.so.5.0 00:03:32.735 SO libspdk_sock_posix.so.6.0 00:03:32.735 LIB libspdk_accel_iaa.a 00:03:32.735 SO libspdk_accel_iaa.so.3.0 00:03:32.735 CC module/blobfs/bdev/blobfs_bdev.o 00:03:32.735 SYMLINK libspdk_accel_dsa.so 00:03:32.735 SYMLINK libspdk_vfu_device.so 00:03:32.736 SYMLINK libspdk_sock_uring.so 00:03:32.736 CC module/bdev/error/vbdev_error.o 00:03:32.736 CC module/bdev/gpt/gpt.o 00:03:32.736 CC module/bdev/gpt/vbdev_gpt.o 00:03:32.736 SYMLINK libspdk_sock_posix.so 00:03:32.736 CC module/bdev/error/vbdev_error_rpc.o 00:03:32.736 SYMLINK libspdk_accel_iaa.so 00:03:32.736 CC module/bdev/lvol/vbdev_lvol.o 00:03:32.993 LIB libspdk_bdev_delay.a 00:03:32.993 CC module/bdev/null/bdev_null.o 00:03:32.993 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:32.993 CC module/bdev/malloc/bdev_malloc.o 00:03:32.993 SO libspdk_bdev_delay.so.6.0 00:03:32.993 CC module/bdev/nvme/bdev_nvme.o 00:03:32.993 SYMLINK libspdk_bdev_delay.so 00:03:32.993 LIB libspdk_bdev_error.a 00:03:32.993 LIB libspdk_bdev_gpt.a 00:03:32.993 SO libspdk_bdev_error.so.6.0 00:03:32.993 SO libspdk_bdev_gpt.so.6.0 00:03:32.993 LIB libspdk_blobfs_bdev.a 00:03:32.993 CC module/bdev/passthru/vbdev_passthru.o 00:03:32.993 CC module/bdev/raid/bdev_raid.o 00:03:32.993 SO libspdk_blobfs_bdev.so.6.0 00:03:33.251 SYMLINK libspdk_bdev_error.so 00:03:33.251 CC module/bdev/split/vbdev_split.o 00:03:33.251 CC module/bdev/raid/bdev_raid_rpc.o 00:03:33.251 SYMLINK libspdk_bdev_gpt.so 00:03:33.251 CC module/bdev/split/vbdev_split_rpc.o 00:03:33.251 CC module/bdev/null/bdev_null_rpc.o 00:03:33.251 SYMLINK libspdk_blobfs_bdev.so 00:03:33.251 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:33.251 CC module/bdev/raid/bdev_raid_sb.o 00:03:33.508 CC module/bdev/raid/raid0.o 00:03:33.508 LIB libspdk_bdev_null.a 00:03:33.508 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:33.508 SO libspdk_bdev_null.so.6.0 00:03:33.508 LIB libspdk_bdev_split.a 00:03:33.508 LIB libspdk_bdev_passthru.a 00:03:33.508 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:33.508 SO libspdk_bdev_split.so.6.0 00:03:33.508 SO libspdk_bdev_passthru.so.6.0 00:03:33.508 SYMLINK libspdk_bdev_null.so 00:03:33.508 SYMLINK libspdk_bdev_passthru.so 00:03:33.508 SYMLINK libspdk_bdev_split.so 00:03:33.508 LIB libspdk_bdev_malloc.a 00:03:33.766 SO libspdk_bdev_malloc.so.6.0 00:03:33.766 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:33.766 CC module/bdev/raid/raid1.o 00:03:33.766 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:33.766 CC module/bdev/uring/bdev_uring.o 00:03:33.766 SYMLINK libspdk_bdev_malloc.so 00:03:33.766 CC module/bdev/nvme/nvme_rpc.o 00:03:33.766 CC module/bdev/aio/bdev_aio.o 00:03:33.766 CC module/bdev/ftl/bdev_ftl.o 00:03:34.024 LIB libspdk_bdev_lvol.a 00:03:34.024 SO libspdk_bdev_lvol.so.6.0 00:03:34.024 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:34.024 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:34.024 SYMLINK libspdk_bdev_lvol.so 00:03:34.024 CC module/bdev/aio/bdev_aio_rpc.o 00:03:34.024 CC 
module/bdev/nvme/bdev_mdns_client.o 00:03:34.024 CC module/bdev/nvme/vbdev_opal.o 00:03:34.282 CC module/bdev/uring/bdev_uring_rpc.o 00:03:34.282 LIB libspdk_bdev_ftl.a 00:03:34.282 LIB libspdk_bdev_zone_block.a 00:03:34.282 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:34.282 SO libspdk_bdev_ftl.so.6.0 00:03:34.282 LIB libspdk_bdev_aio.a 00:03:34.282 SO libspdk_bdev_zone_block.so.6.0 00:03:34.282 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:34.282 SO libspdk_bdev_aio.so.6.0 00:03:34.282 CC module/bdev/raid/concat.o 00:03:34.282 SYMLINK libspdk_bdev_ftl.so 00:03:34.282 LIB libspdk_bdev_uring.a 00:03:34.282 SYMLINK libspdk_bdev_zone_block.so 00:03:34.282 SYMLINK libspdk_bdev_aio.so 00:03:34.540 SO libspdk_bdev_uring.so.6.0 00:03:34.540 SYMLINK libspdk_bdev_uring.so 00:03:34.540 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:34.540 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:34.540 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:34.540 CC module/bdev/iscsi/bdev_iscsi.o 00:03:34.540 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:34.540 LIB libspdk_bdev_raid.a 00:03:34.798 SO libspdk_bdev_raid.so.6.0 00:03:34.798 SYMLINK libspdk_bdev_raid.so 00:03:35.055 LIB libspdk_bdev_iscsi.a 00:03:35.055 SO libspdk_bdev_iscsi.so.6.0 00:03:35.055 SYMLINK libspdk_bdev_iscsi.so 00:03:35.313 LIB libspdk_bdev_virtio.a 00:03:35.313 SO libspdk_bdev_virtio.so.6.0 00:03:35.313 SYMLINK libspdk_bdev_virtio.so 00:03:35.878 LIB libspdk_bdev_nvme.a 00:03:36.136 SO libspdk_bdev_nvme.so.7.0 00:03:36.136 SYMLINK libspdk_bdev_nvme.so 00:03:36.701 CC module/event/subsystems/iobuf/iobuf.o 00:03:36.701 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:36.701 CC module/event/subsystems/scheduler/scheduler.o 00:03:36.701 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:36.701 CC module/event/subsystems/keyring/keyring.o 00:03:36.701 CC module/event/subsystems/vmd/vmd.o 00:03:36.701 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:36.701 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:36.701 CC module/event/subsystems/sock/sock.o 00:03:36.701 LIB libspdk_event_keyring.a 00:03:36.701 LIB libspdk_event_vhost_blk.a 00:03:36.701 LIB libspdk_event_sock.a 00:03:36.701 SO libspdk_event_keyring.so.1.0 00:03:36.701 LIB libspdk_event_scheduler.a 00:03:36.701 LIB libspdk_event_iobuf.a 00:03:36.701 SO libspdk_event_vhost_blk.so.3.0 00:03:36.701 SO libspdk_event_sock.so.5.0 00:03:36.701 SO libspdk_event_scheduler.so.4.0 00:03:36.701 LIB libspdk_event_vfu_tgt.a 00:03:36.701 SO libspdk_event_iobuf.so.3.0 00:03:36.701 LIB libspdk_event_vmd.a 00:03:36.701 SYMLINK libspdk_event_keyring.so 00:03:36.701 SYMLINK libspdk_event_vhost_blk.so 00:03:36.701 SO libspdk_event_vfu_tgt.so.3.0 00:03:36.701 SYMLINK libspdk_event_sock.so 00:03:36.958 SO libspdk_event_vmd.so.6.0 00:03:36.958 SYMLINK libspdk_event_scheduler.so 00:03:36.958 SYMLINK libspdk_event_iobuf.so 00:03:36.958 SYMLINK libspdk_event_vfu_tgt.so 00:03:36.958 SYMLINK libspdk_event_vmd.so 00:03:37.216 CC module/event/subsystems/accel/accel.o 00:03:37.216 LIB libspdk_event_accel.a 00:03:37.216 SO libspdk_event_accel.so.6.0 00:03:37.496 SYMLINK libspdk_event_accel.so 00:03:37.754 CC module/event/subsystems/bdev/bdev.o 00:03:38.011 LIB libspdk_event_bdev.a 00:03:38.011 SO libspdk_event_bdev.so.6.0 00:03:38.011 SYMLINK libspdk_event_bdev.so 00:03:38.269 CC module/event/subsystems/ublk/ublk.o 00:03:38.269 CC module/event/subsystems/nbd/nbd.o 00:03:38.269 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:38.269 CC module/event/subsystems/scsi/scsi.o 00:03:38.269 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:03:38.269 LIB libspdk_event_ublk.a 00:03:38.269 LIB libspdk_event_nbd.a 00:03:38.527 SO libspdk_event_ublk.so.3.0 00:03:38.527 SO libspdk_event_nbd.so.6.0 00:03:38.527 LIB libspdk_event_scsi.a 00:03:38.527 SO libspdk_event_scsi.so.6.0 00:03:38.527 SYMLINK libspdk_event_ublk.so 00:03:38.527 SYMLINK libspdk_event_nbd.so 00:03:38.527 LIB libspdk_event_nvmf.a 00:03:38.527 SYMLINK libspdk_event_scsi.so 00:03:38.527 SO libspdk_event_nvmf.so.6.0 00:03:38.527 SYMLINK libspdk_event_nvmf.so 00:03:38.785 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:38.785 CC module/event/subsystems/iscsi/iscsi.o 00:03:39.043 LIB libspdk_event_vhost_scsi.a 00:03:39.043 SO libspdk_event_vhost_scsi.so.3.0 00:03:39.043 LIB libspdk_event_iscsi.a 00:03:39.043 SO libspdk_event_iscsi.so.6.0 00:03:39.043 SYMLINK libspdk_event_vhost_scsi.so 00:03:39.043 SYMLINK libspdk_event_iscsi.so 00:03:39.301 SO libspdk.so.6.0 00:03:39.301 SYMLINK libspdk.so 00:03:39.560 TEST_HEADER include/spdk/accel.h 00:03:39.560 TEST_HEADER include/spdk/accel_module.h 00:03:39.560 TEST_HEADER include/spdk/assert.h 00:03:39.560 TEST_HEADER include/spdk/barrier.h 00:03:39.560 CXX app/trace/trace.o 00:03:39.560 TEST_HEADER include/spdk/base64.h 00:03:39.560 TEST_HEADER include/spdk/bdev.h 00:03:39.560 TEST_HEADER include/spdk/bdev_module.h 00:03:39.560 TEST_HEADER include/spdk/bdev_zone.h 00:03:39.560 CC app/trace_record/trace_record.o 00:03:39.560 TEST_HEADER include/spdk/bit_array.h 00:03:39.560 TEST_HEADER include/spdk/bit_pool.h 00:03:39.561 TEST_HEADER include/spdk/blob_bdev.h 00:03:39.561 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:39.561 TEST_HEADER include/spdk/blobfs.h 00:03:39.561 TEST_HEADER include/spdk/blob.h 00:03:39.561 TEST_HEADER include/spdk/conf.h 00:03:39.561 TEST_HEADER include/spdk/config.h 00:03:39.561 TEST_HEADER include/spdk/cpuset.h 00:03:39.561 TEST_HEADER include/spdk/crc16.h 00:03:39.561 TEST_HEADER include/spdk/crc32.h 00:03:39.561 TEST_HEADER include/spdk/crc64.h 00:03:39.561 TEST_HEADER include/spdk/dif.h 00:03:39.561 TEST_HEADER include/spdk/dma.h 00:03:39.561 TEST_HEADER include/spdk/endian.h 00:03:39.561 TEST_HEADER include/spdk/env_dpdk.h 00:03:39.561 TEST_HEADER include/spdk/env.h 00:03:39.561 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.561 TEST_HEADER include/spdk/event.h 00:03:39.561 TEST_HEADER include/spdk/fd_group.h 00:03:39.561 TEST_HEADER include/spdk/fd.h 00:03:39.561 TEST_HEADER include/spdk/file.h 00:03:39.561 TEST_HEADER include/spdk/ftl.h 00:03:39.561 TEST_HEADER include/spdk/gpt_spec.h 00:03:39.561 TEST_HEADER include/spdk/hexlify.h 00:03:39.561 TEST_HEADER include/spdk/histogram_data.h 00:03:39.561 TEST_HEADER include/spdk/idxd.h 00:03:39.561 TEST_HEADER include/spdk/idxd_spec.h 00:03:39.561 TEST_HEADER include/spdk/init.h 00:03:39.561 TEST_HEADER include/spdk/ioat.h 00:03:39.561 TEST_HEADER include/spdk/ioat_spec.h 00:03:39.561 TEST_HEADER include/spdk/iscsi_spec.h 00:03:39.561 TEST_HEADER include/spdk/json.h 00:03:39.561 CC examples/ioat/perf/perf.o 00:03:39.561 TEST_HEADER include/spdk/jsonrpc.h 00:03:39.561 TEST_HEADER include/spdk/keyring.h 00:03:39.561 TEST_HEADER include/spdk/keyring_module.h 00:03:39.561 TEST_HEADER include/spdk/likely.h 00:03:39.561 TEST_HEADER include/spdk/log.h 00:03:39.561 TEST_HEADER include/spdk/lvol.h 00:03:39.561 CC examples/util/zipf/zipf.o 00:03:39.561 TEST_HEADER include/spdk/memory.h 00:03:39.561 TEST_HEADER include/spdk/mmio.h 00:03:39.561 TEST_HEADER include/spdk/nbd.h 00:03:39.561 TEST_HEADER 
include/spdk/notify.h 00:03:39.561 CC test/thread/poller_perf/poller_perf.o 00:03:39.561 TEST_HEADER include/spdk/nvme.h 00:03:39.561 TEST_HEADER include/spdk/nvme_intel.h 00:03:39.561 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:39.561 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:39.561 TEST_HEADER include/spdk/nvme_spec.h 00:03:39.561 TEST_HEADER include/spdk/nvme_zns.h 00:03:39.561 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:39.561 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:39.561 TEST_HEADER include/spdk/nvmf.h 00:03:39.561 TEST_HEADER include/spdk/nvmf_spec.h 00:03:39.561 TEST_HEADER include/spdk/nvmf_transport.h 00:03:39.561 TEST_HEADER include/spdk/opal.h 00:03:39.561 TEST_HEADER include/spdk/opal_spec.h 00:03:39.561 TEST_HEADER include/spdk/pci_ids.h 00:03:39.561 TEST_HEADER include/spdk/pipe.h 00:03:39.561 TEST_HEADER include/spdk/queue.h 00:03:39.561 CC test/dma/test_dma/test_dma.o 00:03:39.561 TEST_HEADER include/spdk/reduce.h 00:03:39.561 TEST_HEADER include/spdk/rpc.h 00:03:39.561 TEST_HEADER include/spdk/scheduler.h 00:03:39.561 TEST_HEADER include/spdk/scsi.h 00:03:39.561 CC test/app/bdev_svc/bdev_svc.o 00:03:39.561 TEST_HEADER include/spdk/scsi_spec.h 00:03:39.561 TEST_HEADER include/spdk/sock.h 00:03:39.561 TEST_HEADER include/spdk/stdinc.h 00:03:39.561 TEST_HEADER include/spdk/string.h 00:03:39.561 TEST_HEADER include/spdk/thread.h 00:03:39.819 TEST_HEADER include/spdk/trace.h 00:03:39.819 TEST_HEADER include/spdk/trace_parser.h 00:03:39.819 TEST_HEADER include/spdk/tree.h 00:03:39.819 TEST_HEADER include/spdk/ublk.h 00:03:39.819 TEST_HEADER include/spdk/util.h 00:03:39.819 TEST_HEADER include/spdk/uuid.h 00:03:39.819 TEST_HEADER include/spdk/version.h 00:03:39.819 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.819 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.819 CC test/env/mem_callbacks/mem_callbacks.o 00:03:39.819 TEST_HEADER include/spdk/vhost.h 00:03:39.819 TEST_HEADER include/spdk/vmd.h 00:03:39.819 TEST_HEADER include/spdk/xor.h 00:03:39.819 TEST_HEADER include/spdk/zipf.h 00:03:39.819 CXX test/cpp_headers/accel.o 00:03:39.819 LINK interrupt_tgt 00:03:39.819 LINK spdk_trace_record 00:03:39.819 LINK zipf 00:03:39.819 LINK poller_perf 00:03:39.819 LINK ioat_perf 00:03:39.819 LINK bdev_svc 00:03:39.819 CXX test/cpp_headers/accel_module.o 00:03:40.077 CXX test/cpp_headers/assert.o 00:03:40.077 LINK spdk_trace 00:03:40.077 CC test/app/histogram_perf/histogram_perf.o 00:03:40.077 CC examples/ioat/verify/verify.o 00:03:40.077 LINK test_dma 00:03:40.077 CC app/nvmf_tgt/nvmf_main.o 00:03:40.077 CXX test/cpp_headers/barrier.o 00:03:40.335 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:40.335 CC app/iscsi_tgt/iscsi_tgt.o 00:03:40.335 LINK histogram_perf 00:03:40.335 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:40.335 LINK mem_callbacks 00:03:40.335 CC test/env/vtophys/vtophys.o 00:03:40.335 LINK nvmf_tgt 00:03:40.335 CXX test/cpp_headers/base64.o 00:03:40.335 LINK verify 00:03:40.335 CXX test/cpp_headers/bdev.o 00:03:40.594 LINK iscsi_tgt 00:03:40.594 LINK vtophys 00:03:40.594 CC test/rpc_client/rpc_client_test.o 00:03:40.594 CXX test/cpp_headers/bdev_module.o 00:03:40.594 CXX test/cpp_headers/bdev_zone.o 00:03:40.852 CC test/accel/dif/dif.o 00:03:40.852 LINK nvme_fuzz 00:03:40.852 CC examples/thread/thread/thread_ex.o 00:03:40.852 CC test/blobfs/mkfs/mkfs.o 00:03:40.852 LINK rpc_client_test 00:03:40.852 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:40.852 CXX test/cpp_headers/bit_array.o 00:03:40.852 CXX test/cpp_headers/bit_pool.o 00:03:41.111 CC 
app/spdk_tgt/spdk_tgt.o 00:03:41.111 CXX test/cpp_headers/blob_bdev.o 00:03:41.111 LINK env_dpdk_post_init 00:03:41.111 CC test/event/event_perf/event_perf.o 00:03:41.111 LINK mkfs 00:03:41.111 LINK thread 00:03:41.111 CC test/event/reactor/reactor.o 00:03:41.111 LINK event_perf 00:03:41.111 CC test/event/reactor_perf/reactor_perf.o 00:03:41.111 CXX test/cpp_headers/blobfs_bdev.o 00:03:41.370 LINK spdk_tgt 00:03:41.370 CC test/env/memory/memory_ut.o 00:03:41.370 LINK reactor 00:03:41.370 LINK dif 00:03:41.370 CXX test/cpp_headers/blobfs.o 00:03:41.370 LINK reactor_perf 00:03:41.370 CC test/env/pci/pci_ut.o 00:03:41.628 CC app/spdk_lspci/spdk_lspci.o 00:03:41.628 CC examples/sock/hello_world/hello_sock.o 00:03:41.628 CXX test/cpp_headers/blob.o 00:03:41.629 CC test/event/app_repeat/app_repeat.o 00:03:41.629 CC app/spdk_nvme_perf/perf.o 00:03:41.629 CC app/spdk_nvme_identify/identify.o 00:03:41.629 LINK spdk_lspci 00:03:41.629 CC app/spdk_nvme_discover/discovery_aer.o 00:03:41.629 CXX test/cpp_headers/conf.o 00:03:41.887 LINK app_repeat 00:03:41.887 LINK pci_ut 00:03:41.887 LINK hello_sock 00:03:41.887 CXX test/cpp_headers/config.o 00:03:41.887 LINK spdk_nvme_discover 00:03:41.887 CXX test/cpp_headers/cpuset.o 00:03:41.887 CC test/event/scheduler/scheduler.o 00:03:42.145 CXX test/cpp_headers/crc16.o 00:03:42.145 CC examples/vmd/lsvmd/lsvmd.o 00:03:42.145 CC test/lvol/esnap/esnap.o 00:03:42.145 LINK scheduler 00:03:42.402 CC test/nvme/aer/aer.o 00:03:42.402 CXX test/cpp_headers/crc32.o 00:03:42.402 CC examples/idxd/perf/perf.o 00:03:42.402 LINK lsvmd 00:03:42.402 CXX test/cpp_headers/crc64.o 00:03:42.402 LINK iscsi_fuzz 00:03:42.660 LINK memory_ut 00:03:42.660 CC examples/vmd/led/led.o 00:03:42.660 LINK aer 00:03:42.660 CXX test/cpp_headers/dif.o 00:03:42.660 CC examples/accel/perf/accel_perf.o 00:03:42.660 LINK spdk_nvme_identify 00:03:42.660 LINK spdk_nvme_perf 00:03:42.918 LINK idxd_perf 00:03:42.918 LINK led 00:03:42.918 CXX test/cpp_headers/dma.o 00:03:42.918 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:42.918 CC test/nvme/reset/reset.o 00:03:42.918 CC app/spdk_top/spdk_top.o 00:03:43.176 CXX test/cpp_headers/endian.o 00:03:43.176 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:43.176 CC test/bdev/bdevio/bdevio.o 00:03:43.176 CC app/vhost/vhost.o 00:03:43.176 CC app/spdk_dd/spdk_dd.o 00:03:43.176 CC examples/blob/hello_world/hello_blob.o 00:03:43.176 CXX test/cpp_headers/env_dpdk.o 00:03:43.176 LINK reset 00:03:43.434 LINK accel_perf 00:03:43.434 LINK vhost 00:03:43.434 CXX test/cpp_headers/env.o 00:03:43.434 LINK hello_blob 00:03:43.434 CC test/nvme/sgl/sgl.o 00:03:43.434 LINK vhost_fuzz 00:03:43.692 LINK bdevio 00:03:43.692 CXX test/cpp_headers/event.o 00:03:43.692 CC test/app/jsoncat/jsoncat.o 00:03:43.692 LINK spdk_dd 00:03:43.692 CXX test/cpp_headers/fd_group.o 00:03:43.692 CC examples/nvme/hello_world/hello_world.o 00:03:43.692 LINK jsoncat 00:03:43.965 CC examples/blob/cli/blobcli.o 00:03:43.965 LINK sgl 00:03:43.966 CXX test/cpp_headers/fd.o 00:03:43.966 CXX test/cpp_headers/file.o 00:03:43.966 CC test/nvme/e2edp/nvme_dp.o 00:03:43.966 CC test/nvme/overhead/overhead.o 00:03:43.966 LINK hello_world 00:03:43.966 CXX test/cpp_headers/ftl.o 00:03:43.966 CC test/app/stub/stub.o 00:03:44.234 LINK spdk_top 00:03:44.234 CC test/nvme/err_injection/err_injection.o 00:03:44.235 LINK nvme_dp 00:03:44.235 CC examples/nvme/reconnect/reconnect.o 00:03:44.235 LINK stub 00:03:44.235 LINK overhead 00:03:44.235 CXX test/cpp_headers/gpt_spec.o 00:03:44.235 CC 
examples/bdev/hello_world/hello_bdev.o 00:03:44.492 LINK err_injection 00:03:44.492 LINK blobcli 00:03:44.492 CXX test/cpp_headers/hexlify.o 00:03:44.492 CC test/nvme/startup/startup.o 00:03:44.492 CC app/fio/nvme/fio_plugin.o 00:03:44.492 CC test/nvme/reserve/reserve.o 00:03:44.492 CC app/fio/bdev/fio_plugin.o 00:03:44.492 LINK hello_bdev 00:03:44.751 CXX test/cpp_headers/histogram_data.o 00:03:44.751 CC test/nvme/simple_copy/simple_copy.o 00:03:44.751 LINK startup 00:03:44.751 LINK reconnect 00:03:44.751 LINK reserve 00:03:44.751 CC examples/bdev/bdevperf/bdevperf.o 00:03:44.751 CXX test/cpp_headers/idxd.o 00:03:44.751 CXX test/cpp_headers/idxd_spec.o 00:03:45.009 LINK simple_copy 00:03:45.009 CC test/nvme/connect_stress/connect_stress.o 00:03:45.009 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:45.009 CXX test/cpp_headers/init.o 00:03:45.009 CC test/nvme/boot_partition/boot_partition.o 00:03:45.009 CC examples/nvme/arbitration/arbitration.o 00:03:45.267 LINK spdk_bdev 00:03:45.267 LINK connect_stress 00:03:45.267 CXX test/cpp_headers/ioat.o 00:03:45.267 CC examples/nvme/hotplug/hotplug.o 00:03:45.267 LINK spdk_nvme 00:03:45.267 LINK boot_partition 00:03:45.267 CXX test/cpp_headers/ioat_spec.o 00:03:45.524 CC test/nvme/compliance/nvme_compliance.o 00:03:45.524 CC test/nvme/fused_ordering/fused_ordering.o 00:03:45.524 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:45.524 CXX test/cpp_headers/iscsi_spec.o 00:03:45.524 LINK arbitration 00:03:45.524 LINK hotplug 00:03:45.524 CC test/nvme/fdp/fdp.o 00:03:45.524 LINK nvme_manage 00:03:45.782 CXX test/cpp_headers/json.o 00:03:45.782 LINK fused_ordering 00:03:45.782 LINK doorbell_aers 00:03:45.782 CC test/nvme/cuse/cuse.o 00:03:45.782 LINK bdevperf 00:03:45.782 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:45.782 CXX test/cpp_headers/jsonrpc.o 00:03:45.782 CXX test/cpp_headers/keyring.o 00:03:45.782 LINK nvme_compliance 00:03:45.782 CXX test/cpp_headers/keyring_module.o 00:03:46.040 LINK fdp 00:03:46.040 CC examples/nvme/abort/abort.o 00:03:46.040 CXX test/cpp_headers/likely.o 00:03:46.040 LINK cmb_copy 00:03:46.040 CXX test/cpp_headers/log.o 00:03:46.040 CXX test/cpp_headers/lvol.o 00:03:46.040 CXX test/cpp_headers/memory.o 00:03:46.040 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:46.040 CXX test/cpp_headers/mmio.o 00:03:46.040 CXX test/cpp_headers/nbd.o 00:03:46.040 CXX test/cpp_headers/notify.o 00:03:46.298 CXX test/cpp_headers/nvme.o 00:03:46.298 CXX test/cpp_headers/nvme_intel.o 00:03:46.298 CXX test/cpp_headers/nvme_ocssd.o 00:03:46.298 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:46.298 CXX test/cpp_headers/nvme_spec.o 00:03:46.298 LINK pmr_persistence 00:03:46.298 CXX test/cpp_headers/nvme_zns.o 00:03:46.298 CXX test/cpp_headers/nvmf_cmd.o 00:03:46.298 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:46.298 CXX test/cpp_headers/nvmf.o 00:03:46.554 LINK abort 00:03:46.554 CXX test/cpp_headers/nvmf_spec.o 00:03:46.554 CXX test/cpp_headers/nvmf_transport.o 00:03:46.554 CXX test/cpp_headers/opal.o 00:03:46.554 CXX test/cpp_headers/opal_spec.o 00:03:46.554 CXX test/cpp_headers/pci_ids.o 00:03:46.554 CXX test/cpp_headers/pipe.o 00:03:46.554 CXX test/cpp_headers/queue.o 00:03:46.554 CXX test/cpp_headers/reduce.o 00:03:46.554 CXX test/cpp_headers/rpc.o 00:03:46.811 CXX test/cpp_headers/scheduler.o 00:03:46.811 CXX test/cpp_headers/scsi.o 00:03:46.811 CXX test/cpp_headers/scsi_spec.o 00:03:46.811 CXX test/cpp_headers/sock.o 00:03:46.811 CXX test/cpp_headers/stdinc.o 00:03:46.811 CXX test/cpp_headers/string.o 00:03:46.811 CXX 
test/cpp_headers/thread.o 00:03:46.811 CC examples/nvmf/nvmf/nvmf.o 00:03:46.811 CXX test/cpp_headers/trace.o 00:03:46.811 CXX test/cpp_headers/trace_parser.o 00:03:46.811 CXX test/cpp_headers/tree.o 00:03:47.068 CXX test/cpp_headers/ublk.o 00:03:47.068 CXX test/cpp_headers/util.o 00:03:47.068 CXX test/cpp_headers/uuid.o 00:03:47.068 CXX test/cpp_headers/version.o 00:03:47.068 CXX test/cpp_headers/vfio_user_pci.o 00:03:47.068 CXX test/cpp_headers/vfio_user_spec.o 00:03:47.068 CXX test/cpp_headers/vhost.o 00:03:47.068 CXX test/cpp_headers/vmd.o 00:03:47.068 CXX test/cpp_headers/xor.o 00:03:47.068 CXX test/cpp_headers/zipf.o 00:03:47.327 LINK nvmf 00:03:47.327 LINK cuse 00:03:49.227 LINK esnap 00:03:49.795 00:03:49.795 real 1m15.290s 00:03:49.795 user 7m30.260s 00:03:49.795 sys 1m32.004s 00:03:49.795 02:52:56 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:49.795 02:52:56 make -- common/autotest_common.sh@10 -- $ set +x 00:03:49.795 ************************************ 00:03:49.795 END TEST make 00:03:49.795 ************************************ 00:03:49.795 02:52:56 -- common/autotest_common.sh@1142 -- $ return 0 00:03:49.795 02:52:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:49.795 02:52:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:49.795 02:52:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:49.795 02:52:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.795 02:52:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:49.795 02:52:56 -- pm/common@44 -- $ pid=5190 00:03:49.795 02:52:56 -- pm/common@50 -- $ kill -TERM 5190 00:03:49.795 02:52:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.795 02:52:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:49.795 02:52:56 -- pm/common@44 -- $ pid=5192 00:03:49.795 02:52:56 -- pm/common@50 -- $ kill -TERM 5192 00:03:49.795 02:52:56 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:49.795 02:52:56 -- nvmf/common.sh@7 -- # uname -s 00:03:49.795 02:52:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.795 02:52:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.795 02:52:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.795 02:52:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.795 02:52:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:49.795 02:52:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.795 02:52:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:49.795 02:52:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.795 02:52:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.795 02:52:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:49.795 02:52:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:03:49.795 02:52:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:03:49.795 02:52:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:49.795 02:52:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:49.795 02:52:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:49.795 02:52:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.795 02:52:56 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:49.795 02:52:56 -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.795 02:52:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.795 02:52:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.795 02:52:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.795 02:52:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.795 02:52:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.795 02:52:56 -- paths/export.sh@5 -- # export PATH 00:03:49.796 02:52:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.796 02:52:56 -- nvmf/common.sh@47 -- # : 0 00:03:49.796 02:52:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:49.796 02:52:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:49.796 02:52:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:49.796 02:52:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.796 02:52:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.796 02:52:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:49.796 02:52:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:49.796 02:52:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:49.796 02:52:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:49.796 02:52:56 -- spdk/autotest.sh@32 -- # uname -s 00:03:49.796 02:52:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:49.796 02:52:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:49.796 02:52:56 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.796 02:52:56 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:49.796 02:52:56 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.796 02:52:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:50.053 02:52:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:50.053 02:52:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:50.053 02:52:56 -- spdk/autotest.sh@48 -- # udevadm_pid=53462 00:03:50.053 02:52:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:50.053 02:52:56 -- pm/common@17 -- # local monitor 00:03:50.053 02:52:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.053 02:52:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.053 02:52:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:50.053 02:52:56 -- pm/common@21 -- # date +%s 
00:03:50.053 02:52:56 -- pm/common@25 -- # sleep 1 00:03:50.053 02:52:56 -- pm/common@21 -- # date +%s 00:03:50.053 02:52:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720839176 00:03:50.053 02:52:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720839176 00:03:50.053 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720839176_collect-cpu-load.pm.log 00:03:50.053 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720839176_collect-vmstat.pm.log 00:03:50.989 02:52:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:50.989 02:52:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:50.989 02:52:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:50.989 02:52:57 -- common/autotest_common.sh@10 -- # set +x 00:03:50.990 02:52:57 -- spdk/autotest.sh@59 -- # create_test_list 00:03:50.990 02:52:57 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:50.990 02:52:57 -- common/autotest_common.sh@10 -- # set +x 00:03:50.990 02:52:57 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:50.990 02:52:57 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:50.990 02:52:57 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:50.990 02:52:57 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:50.990 02:52:57 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:50.990 02:52:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:50.990 02:52:57 -- common/autotest_common.sh@1455 -- # uname 00:03:50.990 02:52:57 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:50.990 02:52:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:50.990 02:52:57 -- common/autotest_common.sh@1475 -- # uname 00:03:50.990 02:52:57 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:50.990 02:52:57 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:50.990 02:52:57 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:50.990 02:52:57 -- spdk/autotest.sh@72 -- # hash lcov 00:03:50.990 02:52:57 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:50.990 02:52:57 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:50.990 --rc lcov_branch_coverage=1 00:03:50.990 --rc lcov_function_coverage=1 00:03:50.990 --rc genhtml_branch_coverage=1 00:03:50.990 --rc genhtml_function_coverage=1 00:03:50.990 --rc genhtml_legend=1 00:03:50.990 --rc geninfo_all_blocks=1 00:03:50.990 ' 00:03:50.990 02:52:57 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:50.990 --rc lcov_branch_coverage=1 00:03:50.990 --rc lcov_function_coverage=1 00:03:50.990 --rc genhtml_branch_coverage=1 00:03:50.990 --rc genhtml_function_coverage=1 00:03:50.990 --rc genhtml_legend=1 00:03:50.990 --rc geninfo_all_blocks=1 00:03:50.990 ' 00:03:50.990 02:52:57 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:50.990 --rc lcov_branch_coverage=1 00:03:50.990 --rc lcov_function_coverage=1 00:03:50.990 --rc genhtml_branch_coverage=1 00:03:50.990 --rc genhtml_function_coverage=1 00:03:50.990 --rc genhtml_legend=1 00:03:50.990 --rc geninfo_all_blocks=1 00:03:50.990 --no-external' 00:03:50.990 02:52:57 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:50.990 --rc lcov_branch_coverage=1 
00:03:50.990 --rc lcov_function_coverage=1 00:03:50.990 --rc genhtml_branch_coverage=1 00:03:50.990 --rc genhtml_function_coverage=1 00:03:50.990 --rc genhtml_legend=1 00:03:50.990 --rc geninfo_all_blocks=1 00:03:50.990 --no-external' 00:03:50.990 02:52:57 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:50.990 lcov: LCOV version 1.14 00:03:50.990 02:52:57 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:05.867 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:05.867 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:18.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:18.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:18.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:18.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:18.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:18.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:18.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:18.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:18.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:18.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:18.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 
00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:18.074 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 
00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:18.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:18.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:18.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:18.075 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:18.075 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:18.075 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:18.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:18.075 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:18.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:18.075 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:18.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:18.075 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:18.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:18.075 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:18.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:18.075 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:18.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:18.075 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:18.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:18.075 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:18.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:18.075 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:18.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:18.075 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:18.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:19.975 02:53:26 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:19.975 02:53:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.975 02:53:26 -- common/autotest_common.sh@10 -- # set +x 00:04:19.975 02:53:26 -- spdk/autotest.sh@91 -- # rm -f 00:04:19.975 02:53:26 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.543 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.543 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:20.543 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:20.543 02:53:26 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:20.543 02:53:26 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:20.543 02:53:26 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:20.543 02:53:26 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:20.543 02:53:26 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.543 02:53:26 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:20.543 02:53:26 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:20.543 02:53:26 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.543 02:53:26 -- common/autotest_common.sh@1665 -- # 
[[ none != none ]] 00:04:20.543 02:53:26 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.543 02:53:26 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:20.543 02:53:26 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:20.543 02:53:26 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:20.543 02:53:26 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.543 02:53:26 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.543 02:53:26 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:20.543 02:53:26 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:20.543 02:53:26 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:20.543 02:53:26 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.543 02:53:26 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.543 02:53:26 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:20.543 02:53:26 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:20.543 02:53:26 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:20.543 02:53:26 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.543 02:53:26 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:20.543 02:53:26 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.543 02:53:26 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:20.543 02:53:26 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:20.543 02:53:26 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:20.543 02:53:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:20.543 No valid GPT data, bailing 00:04:20.543 02:53:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:20.543 02:53:26 -- scripts/common.sh@391 -- # pt= 00:04:20.543 02:53:26 -- scripts/common.sh@392 -- # return 1 00:04:20.543 02:53:26 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:20.543 1+0 records in 00:04:20.543 1+0 records out 00:04:20.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408071 s, 257 MB/s 00:04:20.543 02:53:26 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.543 02:53:26 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:20.543 02:53:26 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:20.543 02:53:26 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:20.543 02:53:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:20.543 No valid GPT data, bailing 00:04:20.543 02:53:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:20.543 02:53:26 -- scripts/common.sh@391 -- # pt= 00:04:20.543 02:53:26 -- scripts/common.sh@392 -- # return 1 00:04:20.543 02:53:26 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:20.543 1+0 records in 00:04:20.543 1+0 records out 00:04:20.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00399926 s, 262 MB/s 00:04:20.543 02:53:26 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.543 02:53:26 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:20.543 02:53:26 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:20.543 02:53:26 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:20.543 02:53:26 -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:20.543 No valid GPT data, bailing 00:04:20.802 02:53:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:20.802 02:53:27 -- scripts/common.sh@391 -- # pt= 00:04:20.802 02:53:27 -- scripts/common.sh@392 -- # return 1 00:04:20.802 02:53:27 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:20.802 1+0 records in 00:04:20.802 1+0 records out 00:04:20.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405747 s, 258 MB/s 00:04:20.802 02:53:27 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.802 02:53:27 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:20.802 02:53:27 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:20.802 02:53:27 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:20.802 02:53:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:20.802 No valid GPT data, bailing 00:04:20.802 02:53:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:20.802 02:53:27 -- scripts/common.sh@391 -- # pt= 00:04:20.802 02:53:27 -- scripts/common.sh@392 -- # return 1 00:04:20.802 02:53:27 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:20.802 1+0 records in 00:04:20.802 1+0 records out 00:04:20.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0034514 s, 304 MB/s 00:04:20.802 02:53:27 -- spdk/autotest.sh@118 -- # sync 00:04:20.802 02:53:27 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:20.802 02:53:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:20.802 02:53:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:22.704 02:53:28 -- spdk/autotest.sh@124 -- # uname -s 00:04:22.704 02:53:28 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:22.704 02:53:28 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:22.704 02:53:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.704 02:53:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.704 02:53:28 -- common/autotest_common.sh@10 -- # set +x 00:04:22.704 ************************************ 00:04:22.704 START TEST setup.sh 00:04:22.704 ************************************ 00:04:22.704 02:53:28 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:22.704 * Looking for test storage... 00:04:22.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:22.704 02:53:29 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:22.704 02:53:29 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:22.704 02:53:29 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:22.704 02:53:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.704 02:53:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.704 02:53:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.704 ************************************ 00:04:22.704 START TEST acl 00:04:22.704 ************************************ 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:22.704 * Looking for test storage... 
00:04:22.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:22.704 02:53:29 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:22.704 02:53:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.704 02:53:29 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:22.704 02:53:29 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:22.704 02:53:29 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:22.704 02:53:29 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:22.704 02:53:29 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:22.704 02:53:29 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.704 02:53:29 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.641 02:53:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:23.641 02:53:29 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:23.641 02:53:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:23.641 02:53:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:23.641 02:53:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.641 02:53:29 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:24.210 02:53:30 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.210 Hugepages 00:04:24.210 node hugesize free / total 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.210 00:04:24.210 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:24.210 02:53:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.469 02:53:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:24.469 02:53:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:24.469 02:53:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:24.469 02:53:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:24.469 02:53:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:24.469 02:53:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.469 02:53:30 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:24.469 02:53:30 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:24.469 02:53:30 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.469 02:53:30 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.469 02:53:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:24.469 ************************************ 00:04:24.469 START TEST denied 00:04:24.469 ************************************ 00:04:24.469 02:53:30 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:24.469 02:53:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:24.469 02:53:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:24.469 02:53:30 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:24.469 02:53:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.469 02:53:30 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:25.402 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:25.402 02:53:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:25.402 02:53:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:25.402 02:53:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:25.402 02:53:31 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:25.402 02:53:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:25.402 02:53:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:25.402 02:53:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:25.402 02:53:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:25.402 02:53:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.402 02:53:31 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.969 00:04:25.969 real 0m1.423s 00:04:25.969 user 0m0.554s 00:04:25.969 sys 0m0.813s 00:04:25.969 02:53:32 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.969 ************************************ 00:04:25.969 END TEST denied 00:04:25.969 ************************************ 00:04:25.969 02:53:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:25.969 02:53:32 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:25.969 02:53:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:25.969 02:53:32 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.969 02:53:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.969 02:53:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:25.969 ************************************ 00:04:25.969 START TEST allowed 00:04:25.969 ************************************ 00:04:25.969 02:53:32 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:25.969 02:53:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:25.969 02:53:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:25.969 02:53:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:25.969 02:53:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.969 02:53:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.535 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.535 02:53:32 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:26.535 02:53:32 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:26.535 02:53:32 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:26.535 02:53:32 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:26.535 02:53:32 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:26.535 02:53:32 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:26.535 02:53:32 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:26.535 02:53:32 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:26.535 02:53:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.535 02:53:32 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.470 00:04:27.470 real 0m1.481s 00:04:27.470 user 0m0.657s 00:04:27.470 sys 0m0.818s 00:04:27.470 02:53:33 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:27.470 02:53:33 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:27.470 ************************************ 00:04:27.470 END TEST allowed 00:04:27.470 ************************************ 00:04:27.470 02:53:33 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:27.470 00:04:27.470 real 0m4.672s 00:04:27.470 user 0m2.062s 00:04:27.470 sys 0m2.550s 00:04:27.470 02:53:33 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.470 02:53:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:27.470 ************************************ 00:04:27.470 END TEST acl 00:04:27.470 ************************************ 00:04:27.470 02:53:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:27.470 02:53:33 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:27.470 02:53:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.470 02:53:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.471 02:53:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:27.471 ************************************ 00:04:27.471 START TEST hugepages 00:04:27.471 ************************************ 00:04:27.471 02:53:33 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:27.471 * Looking for test storage... 00:04:27.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5810840 kB' 'MemAvailable: 7401632 kB' 'Buffers: 2436 kB' 'Cached: 1804736 kB' 'SwapCached: 0 kB' 'Active: 435448 kB' 'Inactive: 1476616 kB' 'Active(anon): 115380 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 106264 kB' 'Mapped: 48468 kB' 'Shmem: 10488 kB' 'KReclaimable: 62540 kB' 'Slab: 138064 kB' 'SReclaimable: 62540 kB' 'SUnreclaim: 75524 kB' 'KernelStack: 6396 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 345836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.471 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.472 02:53:33 
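The trace above is setup/common.sh walking /proc/meminfo line by line: each entry is split on ': ', every key that is not Hugepagesize logs a 'continue', and once the key matches the script echoes 2048 and returns, so default_hugepages ends up at 2048 kB. A minimal sketch of that lookup follows; the function name is illustrative, and the per-node variant the real helper supports (reading /sys/devices/system/node/node<N>/meminfo when a node is given, visible in the node= / mapfile lines above) is omitted.

# Minimal sketch of the per-key /proc/meminfo lookup traced above; not the
# exact setup/common.sh helper, just the same split-and-match loop.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each skipped key appears as a 'continue' in the xtrace
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

default_hugepages=$(get_meminfo_sketch Hugepagesize)   # 2048 on this runner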
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:27.472 02:53:33 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:27.472 02:53:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.472 02:53:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.472 02:53:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.472 ************************************ 00:04:27.472 START TEST default_setup 00:04:27.472 ************************************ 00:04:27.472 02:53:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:27.472 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:27.472 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.473 02:53:33 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.413 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.413 0000:00:11.0 (1b36 
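By this point clear_hp has written 0 to every per-node hugepage pool and exported CLEAR_HUGE=yes, and default_setup has turned the 2097152 kB request into nr_hugepages=1024 (consistent with 2097152 / 2048) assigned entirely to node 0. A sketch of those two steps, with illustrative variable names rather than the exact hugepages.sh code:

# Zero existing per-node pools, as the clear_hp loop above does (needs root).
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"
done
export CLEAR_HUGE=yes

# Sizing behind "get_test_nr_hugepages 2097152 0":
size_kb=2097152                                # requested pool size (2 GiB)
hugepagesize_kb=2048                           # from the Hugepagesize lookup earlier
nr_hugepages=$(( size_kb / hugepagesize_kb ))  # 1024
nodes_test[0]=$nr_hugepages                    # the single requested node, 0, gets all 1024 pages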
0010): nvme -> uio_pci_generic 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7894076 kB' 'MemAvailable: 9484720 kB' 'Buffers: 2436 kB' 'Cached: 1804728 kB' 'SwapCached: 0 kB' 'Active: 452216 kB' 'Inactive: 1476628 kB' 'Active(anon): 132148 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123280 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137792 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75572 kB' 'KernelStack: 6432 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.413 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
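After scripts/setup.sh skips 0000:00:03.0 (it backs the mounted vda partitions) and rebinds the two NVMe controllers 0000:00:10.0 and 0000:00:11.0 to uio_pci_generic, verify_nr_hugepages checks /sys/kernel/mm/transparent_hugepage/enabled ('always [madvise] never' here, so THP is not forced off) and then samples AnonHugePages from the fresh meminfo snapshot, which resolves to anon=0 in this run. A sketch of that probe; the sysfs and procfs paths are the standard kernel interfaces, and the surrounding hugepages.sh control flow is abbreviated:

# THP / anon-hugepage probe corresponding to the verify_nr_hugepages trace above.
thp_setting=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this runner
if [[ $thp_setting != *"[never]"* ]]; then
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)  # 0 kB in the snapshot above
else
    anon_kb=0
fi
echo "anon=${anon_kb}"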
# mem=("${mem[@]#Node +([0-9]) }") 00:04:28.414 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7894076 kB' 'MemAvailable: 9484720 kB' 'Buffers: 2436 kB' 'Cached: 1804728 kB' 'SwapCached: 0 kB' 'Active: 452220 kB' 'Inactive: 1476628 kB' 'Active(anon): 132152 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123024 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137792 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75572 kB' 'KernelStack: 6416 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.415 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.415 02:53:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7894468 kB' 'MemAvailable: 9485116 kB' 'Buffers: 2436 kB' 'Cached: 1804728 kB' 'SwapCached: 0 kB' 'Active: 451708 kB' 'Inactive: 1476632 kB' 'Active(anon): 131640 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122784 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137788 kB' 'SReclaimable: 62220 kB' 
'SUnreclaim: 75568 kB' 'KernelStack: 6400 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.416 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.417 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.418 02:53:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:28.418 nr_hugepages=1024 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:28.418 resv_hugepages=0 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.418 surplus_hugepages=0 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.418 anon_hugepages=0 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7894468 kB' 'MemAvailable: 9485116 kB' 'Buffers: 2436 kB' 'Cached: 1804728 kB' 'SwapCached: 0 kB' 'Active: 451704 kB' 'Inactive: 1476632 kB' 'Active(anon): 131636 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122780 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137784 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75564 kB' 'KernelStack: 6384 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:28.418 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
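What this wall of trace shows is a single helper at work: common.sh's get_meminfo walks /proc/meminfo key by key, and every "continue" entry above is just a non-matching key being skipped, which is why each lookup replays the whole file. Reconstructed from the trace alone, the helper amounts to roughly the sketch below; the function body, its use of process substitution, and the quoting are assumptions rather than SPDK's verbatim source.

shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node statistics live under /sys/devices/system/node/nodeN/meminfo;
    # with an empty $node that path does not exist, so the global file is kept.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"    # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# get_meminfo HugePages_Total    -> system-wide value (1024 in this run)
# get_meminfo HugePages_Surp 0   -> value read from NUMA node 0's meminfo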
00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.419 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
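The scan in progress here finishes just below with "echo 1024" for HugePages_Total; together with the HugePages_Surp and HugePages_Rsvd lookups earlier (both 0), hugepages.sh then verifies that the configured page count is fully accounted for and repeats the check per NUMA node, switching get_meminfo over to /sys/devices/system/node/node0/meminfo. Reusing the get_meminfo sketch above, the decision points visible in the trace amount to roughly the following; the variable names are lifted from the trace, but the exact control flow of hugepages.sh is an assumption.

expected=1024                        # nr_hugepages requested by default_setup
surp=$(get_meminfo HugePages_Surp)   # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
nr_hugepages=$(get_meminfo HugePages_Total)
anon=$(get_meminfo AnonHugePages)    # reported in kB, 0 here

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The allocation only passes if every configured page is visible system-wide.
(( expected == nr_hugepages + surp + resv )) || exit 1

# Enumerate NUMA nodes, then re-check each one against its own meminfo copy.
nodes_test=()
for node_dir in /sys/devices/system/node/node[0-9]*; do
    nodes_test[${node_dir##*node}]=$expected
done

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    node_surp=$(get_meminfo HugePages_Surp "$node")   # per-node HugePages_Surp
    echo "node$node HugePages_Surp=$node_surp"
done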
00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7894468 kB' 'MemUsed: 4347504 kB' 'SwapCached: 0 kB' 'Active: 451684 kB' 'Inactive: 1476632 kB' 'Active(anon): 131616 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1807164 kB' 'Mapped: 48536 kB' 'AnonPages: 122748 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62220 kB' 'Slab: 137784 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.420 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.680 
02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.680 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
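The xtrace above is the hugepages helper stepping through every field of the node-0 meminfo file until it reaches HugePages_Surp and echoes its value. A minimal standalone sketch of that kind of lookup is shown below, assuming only standard bash and sed; the function name, the sed-based prefix stripping, and the return-1 fallback are illustrative and are not the actual setup/common.sh implementation traced here.

# Sketch: look up one field from /proc/meminfo, or from the per-node file
# /sys/devices/system/node/node<N>/meminfo when a node id is supplied.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live under /sys/devices/system/node/node<N>/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <N> "; strip that prefix so
    # the field name is always the first token, then split on ':' and spaces.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}
# Example (hypothetical): get_meminfo_sketch HugePages_Surp 0
# would print the surplus hugepage count for NUMA node 0.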
00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.681 node0=1024 expecting 1024 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:28.681 00:04:28.681 real 0m1.007s 00:04:28.681 user 0m0.472s 00:04:28.681 sys 0m0.474s 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.681 ************************************ 00:04:28.681 END TEST default_setup 00:04:28.681 ************************************ 00:04:28.681 02:53:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:28.681 02:53:34 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:28.681 02:53:34 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:28.681 02:53:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.681 02:53:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.681 02:53:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:28.681 ************************************ 00:04:28.681 START TEST per_node_1G_alloc 00:04:28.681 ************************************ 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:28.681 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.682 02:53:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.682 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:28.682 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:28.682 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:28.682 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:28.682 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:28.682 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:28.682 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:28.682 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.682 02:53:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.944 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.944 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.944 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8946780 kB' 'MemAvailable: 10537428 kB' 'Buffers: 2436 kB' 'Cached: 1804728 kB' 'SwapCached: 0 kB' 'Active: 452440 kB' 'Inactive: 1476632 kB' 'Active(anon): 132372 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123484 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137820 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75600 kB' 'KernelStack: 6372 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.944 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.945 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8946780 kB' 'MemAvailable: 10537428 kB' 'Buffers: 2436 kB' 'Cached: 1804728 kB' 'SwapCached: 0 kB' 'Active: 452252 kB' 'Inactive: 1476632 kB' 'Active(anon): 132184 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123296 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137840 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75620 kB' 'KernelStack: 6372 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.946 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8946780 kB' 'MemAvailable: 10537428 kB' 'Buffers: 2436 kB' 'Cached: 1804728 kB' 'SwapCached: 0 kB' 'Active: 451748 kB' 'Inactive: 1476632 kB' 'Active(anon): 131680 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122784 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137836 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75616 kB' 'KernelStack: 6400 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.947 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:28.948 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
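The wall of `[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue` entries here is bash xtrace output from the field-matching loop in setup/common.sh's get_meminfo: `set -x` backslash-escapes the quoted right-hand side, and each `continue` is one /proc/meminfo field that did not match the requested key. A condensed, self-contained sketch of that loop, reconstructed from the trace (simplified; the helper name is illustrative, not the repo's exact function):

    #!/usr/bin/env bash
    # Sketch of the read/compare/continue loop traced above: look up one field
    # in /proc/meminfo (or a per-node meminfo file) and print its value.
    shopt -s extglob
    get_meminfo_value() {                 # illustrative name, not the repo's
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem <"$mem_f"
        # Per-node files prefix each line with "Node <N> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # the repeated 'continue' entries in the log
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo_value HugePages_Rsvd   # prints 0 on the box traced here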
00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
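The full /proc/meminfo snapshot printed a few entries back reports HugePages_Total: 512, Hugepagesize: 2048 kB and Hugetlb: 1048576 kB, which is internally consistent: 512 pages x 2048 kB = 1,048,576 kB (1 GiB) set aside for hugepages. Hugetlb covers every hugepage size, so the equality only holds when a single size is in use, as on this VM. A quick sanity check along those lines (a sketch, not part of the test suite):

    #!/usr/bin/env bash
    # Cross-check the hugepage totals in /proc/meminfo:
    # HugePages_Total * Hugepagesize should equal Hugetlb when only one
    # hugepage size is allocated (512 * 2048 kB = 1048576 kB in this log).
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    pagesz=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    hugetlb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)
    if ((total * pagesz == hugetlb)); then
        echo "consistent: $total pages x $pagesz kB = $hugetlb kB"
    else
        echo "mismatch: $total x $pagesz kB != $hugetlb kB" >&2
    fi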
00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 
02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.230 nr_hugepages=512 00:04:29.230 resv_hugepages=0 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.230 surplus_hugepages=0 00:04:29.230 anon_hugepages=0 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8946780 kB' 'MemAvailable: 10537428 kB' 'Buffers: 2436 kB' 'Cached: 1804728 kB' 'SwapCached: 0 kB' 'Active: 451896 kB' 'Inactive: 1476632 kB' 'Active(anon): 131828 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122940 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 
kB' 'Slab: 137836 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75616 kB' 'KernelStack: 6432 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
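Just before this HugePages_Total query, hugepages.sh echoed nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and asserted (( 512 == nr_hugepages + surp + resv )): the pool the test configured must be exactly what the kernel reports, with no surplus or reserved pages outstanding. A standalone sketch of that style of assertion (verify_hugepage_pool is an illustrative name, not the repo's):

    #!/usr/bin/env bash
    # Assert that the configured hugepage pool matches the kernel counters,
    # in the spirit of the (( 512 == nr_hugepages + surp + resv )) checks above.
    verify_hugepage_pool() {              # illustrative name, not the repo's
        local expected=$1
        local total free surp resv
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
        surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
        resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
        ((total == expected)) || { echo "pool is $total pages, expected $expected" >&2; return 1; }
        ((surp == 0 && resv == 0)) || { echo "surplus/reserved pages outstanding" >&2; return 1; }
        echo "nr_hugepages=$total free=$free surp=$surp resv=$resv"
    }
    verify_hugepage_pool 512   # the pool size used by per_node_1G_alloc above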
00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.230 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 
02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.231 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8946780 kB' 'MemUsed: 3295192 kB' 'SwapCached: 0 kB' 'Active: 451760 kB' 'Inactive: 1476632 kB' 'Active(anon): 131692 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1807164 kB' 'Mapped: 48536 kB' 'AnonPages: 122804 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62220 kB' 'Slab: 137836 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75616 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 
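A little further up, the script switches from system-wide to per-NUMA-node accounting: get_nodes globs /sys/devices/system/node/node<N> (an extglob pattern), records the expected 512 pages per node, and ends up with no_nodes=1 on this single-node VM before re-querying the counters per node. A minimal sketch of that enumeration, reconstructed from the trace:

    #!/usr/bin/env bash
    # Enumerate NUMA nodes the way the get_nodes trace above does and record
    # how many hugepages each node is expected to hold for this test.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512     # 512 pages expected per node here
    done
    no_nodes=${#nodes_sys[@]}
    ((no_nodes > 0)) || { echo "no NUMA nodes found" >&2; exit 1; }
    echo "no_nodes=$no_nodes (nodes: ${!nodes_sys[*]})"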
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.232 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.233 node0=512 expecting 512 00:04:29.233 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:29.234 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:29.234 00:04:29.234 real 0m0.550s 00:04:29.234 user 0m0.271s 00:04:29.234 sys 0m0.291s 00:04:29.234 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.234 ************************************ 00:04:29.234 END TEST per_node_1G_alloc 00:04:29.234 ************************************ 00:04:29.234 02:53:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.234 02:53:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:29.234 02:53:35 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:29.234 02:53:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.234 02:53:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.234 02:53:35 
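The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries above is bash xtrace of setup/common.sh scanning a meminfo file one field at a time until it reaches HugePages_Surp and echoes its value (0 here), after which the test reports node0=512 expecting 512. A minimal sketch of that field scan, assuming only the IFS/read pattern visible in the trace; the function name is illustrative and this is not the SPDK helper itself:

    # Scan a meminfo-style file and print the value of one field,
    # mirroring the IFS=': ' / read -r var val _ / continue loop in the trace.
    get_meminfo_field() {
        local want=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # not the field we want: keep scanning
            echo "$val"                         # e.g. 0 for HugePages_Surp
            return 0
        done < "$file"
        return 1                                # field not present
    }

    get_meminfo_field HugePages_Surp            # reads /proc/meminfo

Per-node files such as /sys/devices/system/node/node0/meminfo prefix each field with "Node 0 ", which the traced script strips first (the "${mem[@]#Node +([0-9]) }" step) before running the same scan.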
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.234 ************************************ 00:04:29.234 START TEST even_2G_alloc 00:04:29.234 ************************************ 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.234 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.493 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.493 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc 
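even_2G_alloc starts by calling get_test_nr_hugepages 2097152, and the trace shows that request becoming nr_hugepages=1024 before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are set and scripts/setup.sh is re-run. A sketch of the arithmetic implied by those numbers, under the assumption that the argument is a size in kB and the default huge page size is the 2048 kB shown in the meminfo snapshots below; variable names are illustrative, not taken from the script:

    # 2 GiB worth of default-size huge pages: 2097152 kB / 2048 kB per page = 1024 pages.
    requested_kb=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    nr_hugepages=$(( requested_kb / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"                                    # 1024, as in the trace

With a single memory node the whole count lands on node0 (the trace's nodes_test[_no_nodes - 1]=1024); the HUGE_EVEN_ALLOC=yes knob asks for an even per-node spread, which is trivial here.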
-- setup/hugepages.sh@92 -- # local surp 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7902924 kB' 'MemAvailable: 9493572 kB' 'Buffers: 2436 kB' 'Cached: 1804728 kB' 'SwapCached: 0 kB' 'Active: 452160 kB' 'Inactive: 1476632 kB' 'Active(anon): 132092 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123244 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137856 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75636 kB' 'KernelStack: 6420 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.493 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.494 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.757 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7902924 kB' 'MemAvailable: 9493572 kB' 'Buffers: 2436 kB' 'Cached: 1804728 kB' 'SwapCached: 0 kB' 'Active: 451832 kB' 'Inactive: 
1476632 kB' 'Active(anon): 131764 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123160 kB' 'Mapped: 48532 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137860 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75640 kB' 'KernelStack: 6448 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.758 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.759 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7903288 kB' 'MemAvailable: 9493936 kB' 'Buffers: 2436 kB' 'Cached: 1804728 kB' 'SwapCached: 0 kB' 'Active: 451552 kB' 'Inactive: 1476632 kB' 'Active(anon): 131484 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122888 kB' 'Mapped: 48532 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137864 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75644 kB' 'KernelStack: 6416 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.760 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:29.761 nr_hugepages=1024 00:04:29.761 resv_hugepages=0 00:04:29.761 surplus_hugepages=0 00:04:29.761 anon_hugepages=0 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7903636 kB' 'MemAvailable: 9494284 kB' 'Buffers: 2436 kB' 'Cached: 1804728 kB' 'SwapCached: 0 kB' 'Active: 451544 kB' 'Inactive: 1476632 kB' 'Active(anon): 131476 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122900 kB' 'Mapped: 48532 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137864 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75644 kB' 'KernelStack: 6416 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 
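For reference, the get_meminfo helper being traced above (setup/common.sh@17-33) appears to behave roughly like the bash sketch below. It is reconstructed only from the trace in this log, not taken from the SPDK sources: the override checked at common.sh@25 is skipped, and the trailing "return 1" fallback is an assumption added for this sketch.

#!/usr/bin/env bash
# Sketch of the get_meminfo helper traced from setup/common.sh above.
# Reconstructed from the xtrace output only; the real SPDK helper may
# differ in minor details.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    # Default to the system-wide meminfo; switch to the per-NUMA-node file
    # when a node index is given and that file exists (common.sh@22-24).
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix
    # so the same parsing works for both files (common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
        # Walk the fields until the requested key (e.g. HugePages_Surp)
        # is found, then print its value (common.sh@31-33).
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1   # assumption: key not present in this meminfo file
}

# Example: system-wide free huge pages, then the count on NUMA node 0.
get_meminfo HugePages_Free
get_meminfo HugePages_Free 0
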
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.761 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.762 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.763 02:53:36 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7903636 kB' 'MemUsed: 4338336 kB' 'SwapCached: 0 kB' 'Active: 451752 kB' 'Inactive: 1476632 kB' 'Active(anon): 131684 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1807164 kB' 'Mapped: 48532 kB' 'AnonPages: 122796 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62220 kB' 'Slab: 137856 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.763 02:53:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.763 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.764 node0=1024 expecting 1024 00:04:29.764 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:29.765 02:53:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.765 00:04:29.765 real 0m0.565s 00:04:29.765 user 0m0.286s 00:04:29.765 sys 0m0.282s 00:04:29.765 ************************************ 00:04:29.765 END TEST even_2G_alloc 00:04:29.765 ************************************ 00:04:29.765 02:53:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.765 02:53:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.765 02:53:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:29.765 02:53:36 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:29.765 02:53:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.765 02:53:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.765 02:53:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.765 ************************************ 00:04:29.765 START TEST odd_alloc 00:04:29.765 ************************************ 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
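For reference, the odd_alloc parameters above reduce to a single ceiling division: HUGEMEM=2049 MB is 2098176 kB, and with the 2048 kB hugepage size reported in the meminfo dumps below that rounds up to 1025 pages, deliberately an odd count. A minimal sketch of that conversion (it mirrors the numbers in this log, not the exact code path inside setup/hugepages.sh):

    size_kb=2098176                                                # HUGEMEM=2049 -> 2049 * 1024 kB
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 kB on this VM
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))  # ceiling division -> 1025
    echo "nr_hugepages=$nr_hugepages ($(( nr_hugepages * hugepage_kb )) kB of Hugetlb)"

With the values from this run the echo line works out to 1025 pages and 2099200 kB, matching the Hugetlb figure in the meminfo snapshots that follow.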
00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.765 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.336 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.336 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908644 kB' 'MemAvailable: 9499296 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 451920 kB' 'Inactive: 1476636 kB' 'Active(anon): 131852 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123044 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137824 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75604 kB' 'KernelStack: 6420 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 
02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.336 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 
02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
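The long runs of key comparisons and "continue" above are get_meminfo stepping through /proc/meminfo one field at a time with IFS=': ' until the requested key matches, then echoing its value (0 for AnonHugePages here). A stripped-down sketch of that scan, under a hypothetical helper name rather than the real setup/common.sh function:

    get_meminfo_value() {   # hypothetical name; the real helper is get_meminfo in setup/common.sh
        local get=$1 var val _
        # Walk /proc/meminfo until the requested key matches, then print its value.
        # The real helper can also read /sys/devices/system/node/node<N>/meminfo for
        # per-node queries, stripping the leading "Node <N> " prefix from those lines first.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    anon=$(get_meminfo_value AnonHugePages)    # 0 in the run above
    surp=$(get_meminfo_value HugePages_Surp)   # 0 in the scan that follows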
00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908428 kB' 'MemAvailable: 9499080 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 451532 kB' 'Inactive: 1476636 kB' 'Active(anon): 131464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122688 kB' 'Mapped: 48472 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137840 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75620 kB' 'KernelStack: 6432 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.337 02:53:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.337 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.338 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 
02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908176 kB' 'MemAvailable: 9498828 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 451928 kB' 'Inactive: 1476636 kB' 'Active(anon): 131860 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123064 kB' 'Mapped: 48472 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137840 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75620 kB' 'KernelStack: 6448 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:30.339 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.339 02:53:36 
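The meminfo snapshot above already shows HugePages_Total and HugePages_Free at 1025 with zero surplus pages, and the scan now moving on to HugePages_Rsvd should likewise find 0. For a quick manual cross-check of the same counters outside the harness (an illustrative one-liner, not something the test itself runs):

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {printf "%s %s\n", $1, $2}' /proc/meminfo
    # Expected on this VM after the odd_alloc setup:
    #   HugePages_Total: 1025
    #   HugePages_Free: 1025
    #   HugePages_Rsvd: 0
    #   HugePages_Surp: 0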
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[get_meminfo field scan, condensed: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free are each tested against HugePages_Rsvd, fail the match and hit continue]
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:30.341 nr_hugepages=1025 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
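The two get_meminfo calls traced here (HugePages_Rsvd above, HugePages_Total starting below) both walk a meminfo snapshot with IFS=': ' and read -r var val _, skipping every field until the name matches the requested key and then echoing its value. A minimal standalone sketch of that parse pattern, assuming plain /proc/meminfo and none of setup/common.sh's snapshot or per-node handling, would look like this:

  #!/usr/bin/env bash
  # Sketch only: mirrors the field-scan pattern visible in the trace, not the
  # actual setup/common.sh implementation (no mapfile snapshot, no node=N path).
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # e.g. "HugePages_Rsvd:    0" -> var=HugePages_Rsvd, val=0; for lines
          # like "MemTotal: 12241972 kB" the trailing "kB" lands in the discarded field.
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  get_meminfo_value HugePages_Rsvd   # prints 0 on the VM traced above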
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908428 kB' 'MemAvailable: 9499080 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 451692 kB' 'Inactive: 1476636 kB' 'Active(anon): 131624 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123068 kB' 'Mapped: 48532 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137832 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75612 kB' 'KernelStack: 6384 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB'
00:04:30.341 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[get_meminfo field scan, condensed: every field of the snapshot above from MemTotal through Unaccepted is tested against HugePages_Total, fails the match and hits continue]
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
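The snapshot above reports HugePages_Total: 1025, HugePages_Free: 1025, HugePages_Rsvd: 0 and HugePages_Surp: 0, and the check a few lines below (hugepages.sh@110) asserts the accounting identity the test relies on: HugePages_Total must equal the requested page count plus surplus plus reserved pages. A quick manual spot-check of that identity, as a hypothetical snippet that is not part of the autotest scripts, could be:

  # Hypothetical spot-check, not part of the SPDK autotest scripts: re-derive the
  # identity hugepages.sh@110 verifies, i.e. Total == requested + Surp + Rsvd.
  requested=1025   # the page count the odd_alloc test requested (nr_hugepages=1025 in the trace)
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  if (( total == requested + surp + rsvd )); then
      echo "hugepage accounting consistent: total=$total rsvd=$rsvd surp=$surp"
  else
      echo "hugepage accounting mismatch: total=$total, expected $((requested + surp + rsvd))" >&2
  fi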
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.343 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908428 kB' 'MemUsed: 4333544 kB' 'SwapCached: 0 kB' 'Active: 451532 kB' 'Inactive: 1476636 kB' 'Active(anon): 131464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1807168 kB' 'Mapped: 48532 kB' 'AnonPages: 122652 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62220 kB' 'Slab: 137832 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[get_meminfo field scan over node0's meminfo, condensed: every field of the snapshot above from MemTotal through HugePages_Free is tested against HugePages_Surp, fails the match and hits continue]
00:04:30.344 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.344 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
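For the per-node pass, get_meminfo switches mem_f to /sys/devices/system/node/node0/meminfo and strips the "Node 0 " prefix that every line in that file carries (the mem=("${mem[@]#Node +([0-9]) }") expansion above) before running the same field scan. A small sketch that lists the per-node hugepage counters the same way, independent of the test scripts, is:

  # Sketch only: dump the per-node hugepage counters from the same sysfs files
  # the trace reads; every line there is prefixed with "Node <N> ", which is
  # stripped before filtering.
  for f in /sys/devices/system/node/node*/meminfo; do
      node=${f#/sys/devices/system/node/}   # "node0/meminfo"
      node=${node%%/*}                      # "node0"
      echo "== $node =="
      sed 's/^Node [0-9][0-9]* *//' "$f" | grep '^HugePages_'
  done

On the single-node VM traced here this reports HugePages_Total: 1025, HugePages_Free: 1025 and HugePages_Surp: 0 for node0, matching the snapshot just read.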
00:04:30.344 02:53:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:30.344 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:30.344 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.344 node0=1025 expecting 1025
00:04:30.344 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.344 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.344 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:30.344 02:53:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:30.344
00:04:30.344 real 0m0.575s
00:04:30.344 user 0m0.269s
00:04:30.344 sys 0m0.306s
00:04:30.344 ************************************
00:04:30.344 END TEST odd_alloc
00:04:30.344 ************************************
00:04:30.344 02:53:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:30.344 02:53:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:30.344 02:53:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:30.344 02:53:36 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:30.344 02:53:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:30.344 02:53:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:30.344 02:53:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:30.344 ************************************
00:04:30.344 START TEST custom_alloc
00:04:30.344 ************************************
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
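get_test_nr_hugepages converts the requested 1048576 kB (1 GiB) into a hugepage count using the 2048 kB default hugepage size reported in the snapshots above, which is where nr_hugepages=512 comes from; the trace lines that follow then place all 512 pages on node 0 and hand that layout to scripts/setup.sh through HUGENODE. A hedged sketch of the same arithmetic and hand-off (treat the exact environment-variable contract of setup.sh as an assumption here, not something this log confirms):

  # Sketch of the sizing math traced above: 1 GiB requested with 2 MiB hugepages.
  size_kb=1048576                                                      # requested size in kB
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
  nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 1048576 / 2048 = 512
  echo "nr_hugepages=$nr_hugepages"

  # Single-node layout, as in the trace: all pages on node 0. Passing HUGENODE in
  # the environment mirrors what the following trace lines show; the precise way
  # setup.sh consumes it is assumed rather than taken from this log.
  HUGENODE="nodes_hp[0]=$nr_hugepages" /home/vagrant/spdk_repo/spdk/scripts/setup.sh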
setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.604 02:53:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.867 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.867 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8961836 kB' 'MemAvailable: 10552488 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 451864 kB' 'Inactive: 1476636 kB' 'Active(anon): 131796 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122928 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137796 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75576 kB' 'KernelStack: 6420 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
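(For readers following the trace: the scan above is the harness's get_meminfo helper walking a mapfile'd snapshot of /proc/meminfo with IFS=': ' until the requested key matches, then echoing its value. A minimal stand-alone sketch of the same idea, assuming only standard bash — the function name and fallback behaviour here are illustrative, not the exact SPDK helper:

#!/usr/bin/env bash
# Minimal sketch of a get_meminfo-style lookup: split each
# "Key:   value kB" line of /proc/meminfo on ':' plus whitespace,
# the same IFS=': ' trick the trace above relies on.
meminfo_value() {
    local want=$1 key val _
    while IFS=': ' read -r key val _; do
        if [[ $key == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1   # key not present in this kernel's meminfo
}

meminfo_value AnonHugePages    # prints 0 on the VM in this log
meminfo_value HugePages_Total  # prints 512 once the custom pool exists
)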
00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.867 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8962152 kB' 'MemAvailable: 10552804 kB' 'Buffers: 2436 kB' 'Cached: 
1804732 kB' 'SwapCached: 0 kB' 'Active: 451512 kB' 'Inactive: 1476636 kB' 'Active(anon): 131444 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122840 kB' 'Mapped: 48532 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137836 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75616 kB' 'KernelStack: 6400 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.868 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
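(Aside: the harness repeats the full per-key scan once per counter it needs — AnonHugePages above, HugePages_Surp here, HugePages_Rsvd next. Outside the test scripts, the same four hugepage counters can be read in a single pass; the awk one-liner below is an illustrative alternative, not part of the SPDK scripts:

# Illustrative one-pass read of the counters the verification uses.
read -r total free rsvd surp < <(awk '
    /^HugePages_Total:/ { t = $2 }
    /^HugePages_Free:/  { f = $2 }
    /^HugePages_Rsvd:/  { r = $2 }
    /^HugePages_Surp:/  { s = $2 }
    END { print t, f, r, s }' /proc/meminfo)
echo "total=$total free=$free rsvd=$rsvd surp=$surp"
# On the VM in this log: total=512 free=512 rsvd=0 surp=0
)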
00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.869 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8962152 kB' 'MemAvailable: 10552804 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 451544 kB' 'Inactive: 1476636 kB' 'Active(anon): 131476 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122908 kB' 'Mapped: 48532 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137836 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75616 kB' 'KernelStack: 6416 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.870 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:30.871 nr_hugepages=512 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:30.871 resv_hugepages=0 
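For reference, the xtrace above is the get_meminfo helper in setup/common.sh scanning /proc/meminfo one key at a time until it reaches HugePages_Rsvd, echoing 0 and returning, after which hugepages.sh records resv=0 and prints nr_hugepages=512. A minimal sketch of that lookup loop, with simplified names and the per-node handling left out (so not the verbatim SPDK function), could look like this:

# Sketch: single-key /proc/meminfo lookup in the style of the trace above.
# The real helper mapfiles the whole file and also supports per-node files;
# both details are omitted here for brevity.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching key is skipped
        echo "$val"                        # matching key: print its value
        return 0
    done < /proc/meminfo
    return 1
}

# e.g. resv=$(get_meminfo_sketch HugePages_Rsvd)   -> 0 on this runner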
00:04:30.871 surplus_hugepages=0 00:04:30.871 anon_hugepages=0 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8962404 kB' 'MemAvailable: 10553056 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 451488 kB' 'Inactive: 1476636 kB' 'Active(anon): 131420 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122816 kB' 'Mapped: 48532 kB' 'Shmem: 10464 kB' 'KReclaimable: 62220 kB' 'Slab: 137828 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75608 kB' 'KernelStack: 6400 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.871 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.872 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 
02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8962740 kB' 'MemUsed: 3279232 kB' 'SwapCached: 0 kB' 'Active: 451492 kB' 'Inactive: 1476636 kB' 'Active(anon): 131424 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1807168 kB' 'Mapped: 48532 kB' 'AnonPages: 122816 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62220 kB' 'Slab: 137828 kB' 'SReclaimable: 62220 kB' 'SUnreclaim: 75608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.132 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.133 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.134 02:53:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.134 node0=512 expecting 512 00:04:31.134 ************************************ 00:04:31.134 END TEST custom_alloc 00:04:31.134 ************************************ 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:31.134 00:04:31.134 real 0m0.582s 00:04:31.134 user 0m0.275s 00:04:31.134 sys 0m0.301s 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.134 02:53:37 setup.sh.hugepages.custom_alloc 
-- common/autotest_common.sh@10 -- # set +x 00:04:31.134 02:53:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:31.134 02:53:37 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:31.134 02:53:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.134 02:53:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.134 02:53:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:31.134 ************************************ 00:04:31.134 START TEST no_shrink_alloc 00:04:31.134 ************************************ 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.134 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.394 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.394 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:31.394 
02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7922248 kB' 'MemAvailable: 9512896 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 446968 kB' 'Inactive: 1476636 kB' 'Active(anon): 126900 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118008 kB' 'Mapped: 47968 kB' 'Shmem: 10464 kB' 'KReclaimable: 62212 kB' 'Slab: 137676 kB' 'SReclaimable: 62212 kB' 'SUnreclaim: 75464 kB' 'KernelStack: 6244 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
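The no_shrink_alloc test above requests 2097152 kB of hugepages on node 0 and arrives at nr_hugepages=1024, which matches the 2048 kB Hugepagesize reported in the meminfo dumps (2097152 / 2048 = 1024, though the division itself is not shown in the trace); verify_nr_hugepages then checks transparent_hugepage before reading AnonHugePages. A hedged sketch of those two steps, with simplified variable names rather than the exact hugepages.sh internals:

# Sketch: sizing arithmetic and THP gate assumed to underlie the trace above.
size_kb=2097152                                            # requested test size in kB
hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
nr_hugepages=$(( size_kb / hp_kb ))                        # -> 1024

# AnonHugePages only matters when THP is not pinned to [never].
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp != *"[never]"* ]]; then
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon_kb=0
fi
echo "nr_hugepages=$nr_hugepages anon_hugepages=$anon_kb"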
00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.394 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 
02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 
02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.395 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.396 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7922248 kB' 'MemAvailable: 9512896 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 446760 kB' 'Inactive: 1476636 kB' 'Active(anon): 126692 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 117808 kB' 'Mapped: 47848 kB' 'Shmem: 10464 kB' 'KReclaimable: 62212 kB' 'Slab: 137620 kB' 'SReclaimable: 62212 kB' 'SUnreclaim: 75408 kB' 'KernelStack: 6272 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.658 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.659 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7922248 kB' 'MemAvailable: 9512896 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 447024 kB' 'Inactive: 1476636 kB' 'Active(anon): 126956 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118064 kB' 'Mapped: 47848 kB' 'Shmem: 10464 kB' 'KReclaimable: 62212 kB' 'Slab: 137620 kB' 'SReclaimable: 62212 kB' 'SUnreclaim: 75408 kB' 'KernelStack: 6272 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.660 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.661 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.662 nr_hugepages=1024 00:04:31.662 resv_hugepages=0 00:04:31.662 surplus_hugepages=0 00:04:31.662 anon_hugepages=0 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
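For readers following the trace: the long runs of "IFS=': '", "read -r var val _" and "continue" above are setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time until it reaches the requested key (here HugePages_Rsvd, which comes back 0). Below is a minimal standalone sketch of that idea, not the SPDK implementation; the function name get_meminfo_sketch and the sed-based "Node <N>" prefix strip are illustrative choices.

#!/usr/bin/env bash
# Illustrative sketch only -- not the SPDK setup/common.sh code.
# get_meminfo_sketch FIELD [NODE] prints one /proc/meminfo value; with a node
# number it reads the per-node file and drops the "Node <N> " line prefix.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Non-matching fields are skipped -- the "continue" lines in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_sketch HugePages_Rsvd    # 0 on this runner
get_meminfo_sketch HugePages_Total   # 1024 on this runner

Every field that does not match simply falls through to continue, which is why a single lookup produces dozens of near-identical trace lines.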
00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7922248 kB' 'MemAvailable: 9512896 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 446688 kB' 'Inactive: 1476636 kB' 'Active(anon): 126620 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118032 kB' 'Mapped: 47792 kB' 'Shmem: 10464 kB' 'KReclaimable: 62212 kB' 'Slab: 137612 kB' 'SReclaimable: 62212 kB' 'SUnreclaim: 75400 kB' 'KernelStack: 6304 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.662 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
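The scan in progress here is the same lookup again, this time for the system-wide HugePages_Total. When reading logs like this it can be quicker to query the standard kernel interfaces directly; the commands below are an aside for reproducing the numbers in the snapshot above (standard procfs/sysfs paths, not part of the test script):

grep -E '^(HugePages_|Hugepagesize|Hugetlb)' /proc/meminfo   # pool totals, free, rsvd, surp
cat /proc/sys/vm/nr_hugepages                                # default-size pool, system-wide
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # per-size pool
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # node0's share

On this runner the snapshot shows HugePages_Total: 1024 with Hugepagesize: 2048 kB, which matches the Hugetlb: 2097152 kB (2 GiB) reserved in the pool.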
00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.663 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.664 02:53:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:31.664 02:53:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7922508 kB' 'MemUsed: 4319464 kB' 'SwapCached: 0 kB' 'Active: 446732 kB' 'Inactive: 1476636 kB' 'Active(anon): 126664 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1807168 kB' 'Mapped: 47792 kB' 'AnonPages: 118040 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62212 kB' 'Slab: 137612 kB' 'SReclaimable: 62212 kB' 'SUnreclaim: 75400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.664 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 
02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.665 node0=1024 expecting 1024 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.665 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.924 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.924 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.924 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.924 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:31.924 02:53:38 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7920560 kB' 'MemAvailable: 9511208 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 447256 kB' 'Inactive: 1476636 kB' 'Active(anon): 127188 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118132 kB' 'Mapped: 47948 kB' 'Shmem: 10464 kB' 'KReclaimable: 62212 kB' 'Slab: 137536 kB' 'SReclaimable: 62212 kB' 'SUnreclaim: 75324 kB' 'KernelStack: 6292 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
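The per-node pass that finishes just above (ending in "node0=1024 expecting 1024" and the [[ 1024 == 1024 ]] check) walks /sys/devices/system/node/node*/meminfo and confirms each node reports the expected page count. A minimal sketch of that bookkeeping follows, reusing the hypothetical get_meminfo_sketch helper from the earlier note; the real hugepages.sh also folds reserved and surplus pages into the expectation, but both are 0 in this run and are omitted here.

# Illustrative sketch only -- not the setup/hugepages.sh implementation.
verify_nodes_sketch() {
    local expected=$1 node n have
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}                                  # node0 -> 0
        have=$(get_meminfo_sketch HugePages_Total "$n")   # per-node pool size
        echo "node$n=$have expecting $expected"
        (( have == expected )) || return 1
    done
}

verify_nodes_sketch 1024    # prints "node0=1024 expecting 1024" on this runner

Immediately after that check the trace sets CLEAR_HUGE=no and NRHUGE=512 and re-runs /home/vagrant/spdk_repo/spdk/scripts/setup.sh, which is why setup.sh reports "Requested 512 hugepages but 1024 already allocated on node0" and leaves the existing pool untouched; that refusal to shrink an already-larger allocation is exactly what this no_shrink_alloc test exercises.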
00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.924 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.185 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
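The AnonHugePages scan underway here only runs because of the gate near the start of this pass, where hugepages.sh tested the transparent-hugepage policy string ("always [madvise] never" on this runner) against *[never]*. A hedged sketch of that gate, again using the hypothetical helper from the first note:

# Illustrative sketch only. The THP policy comes from a standard sysfs file;
# the pattern test mirrors the [[ ... != *\[\n\e\v\e\r\]* ]] line in the trace.
anon=0
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB on this runner, so anon=0
fi
echo "anon_hugepages=$anon"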
00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7920792 kB' 'MemAvailable: 9511440 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 447080 kB' 'Inactive: 1476636 kB' 'Active(anon): 127012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118156 kB' 'Mapped: 47796 kB' 'Shmem: 10464 kB' 'KReclaimable: 62212 kB' 'Slab: 137504 kB' 'SReclaimable: 62212 kB' 'SUnreclaim: 75292 kB' 'KernelStack: 6288 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.186 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 
02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.187 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7920792 kB' 'MemAvailable: 9511440 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 446708 kB' 'Inactive: 1476636 kB' 'Active(anon): 126640 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 117836 kB' 'Mapped: 47796 kB' 'Shmem: 10464 kB' 'KReclaimable: 62212 kB' 'Slab: 137504 kB' 'SReclaimable: 62212 kB' 'SUnreclaim: 75292 kB' 'KernelStack: 6304 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.188 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.189 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.190 nr_hugepages=1024 00:04:32.190 resv_hugepages=0 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.190 surplus_hugepages=0 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.190 anon_hugepages=0 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7920792 kB' 'MemAvailable: 9511440 kB' 'Buffers: 2436 kB' 'Cached: 1804732 kB' 'SwapCached: 0 kB' 'Active: 446640 kB' 'Inactive: 1476636 kB' 'Active(anon): 126572 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118000 kB' 'Mapped: 47796 kB' 'Shmem: 10464 kB' 'KReclaimable: 62212 kB' 'Slab: 137504 kB' 'SReclaimable: 62212 kB' 'SUnreclaim: 75292 kB' 'KernelStack: 6288 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 6123520 kB' 'DirectMap1G: 8388608 kB' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.190 02:53:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.190 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
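The long run of "continue" entries above and below is setup/common.sh's get_meminfo helper scanning /proc/meminfo with IFS=': ' and read -r, skipping every key until it reaches the one it was asked for (HugePages_Total in this pass). A minimal sketch of that scan, assuming only the plain system-wide /proc/meminfo path (the cached mapfile array and the per-node branch of the real helper are left out); the function name follows the trace, everything else is illustrative:

get_meminfo() {
    local get=$1 var val _
    # Each /proc/meminfo line looks like "HugePages_Total:    1024";
    # IFS=': ' splits it into key, value and an optional unit.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Total   # prints 1024 on this VM, matching the echo in the trace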
00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.191 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
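Once this scan returns 1024, the trace re-runs the helper with a node argument (get_meminfo HugePages_Surp 0). For that case common.sh switches the source to the per-node meminfo file and strips the "Node 0 " prefix from every line so the same key matching still works. The mapfile call and the prefix-strip pattern below are copied from the trace; the surrounding lines are a reduced sketch that reads the file directly:

shopt -s extglob                     # required by the +([0-9]) pattern below
node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
# Per-node lines read "Node 0 HugePages_Surp: 0"; drop the node prefix.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | grep -F HugePages_Surp    # -> "HugePages_Surp: 0"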
00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7922688 kB' 'MemUsed: 4319284 kB' 'SwapCached: 0 kB' 'Active: 446900 kB' 'Inactive: 1476636 kB' 'Active(anon): 126832 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 
kB' 'Inactive(file): 1476636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1807168 kB' 'Mapped: 47796 kB' 'AnonPages: 117940 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62212 kB' 'Slab: 137504 kB' 'SReclaimable: 62212 kB' 'SUnreclaim: 75292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 
02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.192 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.193 node0=1024 expecting 1024 00:04:32.193 ************************************ 00:04:32.193 END TEST no_shrink_alloc 00:04:32.193 ************************************ 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:32.193 00:04:32.193 real 0m1.142s 00:04:32.193 user 0m0.572s 00:04:32.193 sys 0m0.577s 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.193 02:53:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:32.193 02:53:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:32.193 02:53:38 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:32.193 02:53:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:32.193 02:53:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:32.193 
02:53:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.193 02:53:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.193 02:53:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.193 02:53:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.193 02:53:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:32.193 02:53:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:32.193 ************************************ 00:04:32.193 END TEST hugepages 00:04:32.193 ************************************ 00:04:32.193 00:04:32.193 real 0m4.877s 00:04:32.193 user 0m2.313s 00:04:32.193 sys 0m2.488s 00:04:32.193 02:53:38 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.193 02:53:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:32.452 02:53:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:32.452 02:53:38 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:32.452 02:53:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.452 02:53:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.452 02:53:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.452 ************************************ 00:04:32.452 START TEST driver 00:04:32.452 ************************************ 00:04:32.452 02:53:38 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:32.452 * Looking for test storage... 00:04:32.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:32.452 02:53:38 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:32.452 02:53:38 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.452 02:53:38 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.019 02:53:39 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:33.019 02:53:39 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.019 02:53:39 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.019 02:53:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:33.019 ************************************ 00:04:33.019 START TEST guess_driver 00:04:33.019 ************************************ 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
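The guess_driver trace that starts here first tries the VFIO path: it looks for populated /sys/kernel/iommu_groups entries and for the unsafe no-IOMMU module parameter, and only when both checks fail (as they do on this NET_TYPE=virt guest) does it fall back to probing uio_pci_generic with modprobe --show-depends. A rough, simplified sketch of that decision, assuming the VFIO branch would report vfio-pci (the trace returns before printing a driver name) and folding the is_driver/mod/dep helpers into a single grep:

pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    # Treat VFIO as usable when at least one IOMMU group actually exists.
    if [[ -e ${groups[0]} ]]; then
        echo vfio-pci
    # Otherwise fall back to uio_pci_generic if modprobe resolves it to a .ko.
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}

pick_driver    # prints uio_pci_generic in this run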
00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:33.019 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:33.020 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:33.020 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:33.020 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:33.020 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:33.020 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:33.020 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:33.020 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:33.020 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:33.020 Looking for driver=uio_pci_generic 00:04:33.020 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:33.020 02:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.020 02:53:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.020 02:53:39 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:33.587 02:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:33.587 02:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:33.587 02:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.846 02:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.846 02:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:33.846 02:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.846 02:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.846 02:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:33.846 02:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.846 02:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:33.846 02:53:40 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:33.846 02:53:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.846 02:53:40 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:34.414 00:04:34.414 real 0m1.481s 00:04:34.414 user 0m0.526s 00:04:34.414 sys 0m0.925s 00:04:34.414 02:53:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:34.414 02:53:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:34.414 ************************************ 00:04:34.414 END TEST guess_driver 00:04:34.414 ************************************ 00:04:34.414 02:53:40 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:34.414 ************************************ 00:04:34.414 END TEST driver 00:04:34.414 ************************************ 00:04:34.414 00:04:34.414 real 0m2.163s 00:04:34.414 user 0m0.787s 00:04:34.414 sys 0m1.408s 00:04:34.414 02:53:40 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.414 02:53:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:34.414 02:53:40 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:34.414 02:53:40 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:34.414 02:53:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.414 02:53:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.414 02:53:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.672 ************************************ 00:04:34.672 START TEST devices 00:04:34.672 ************************************ 00:04:34.672 02:53:40 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:34.672 * Looking for test storage... 00:04:34.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:34.672 02:53:40 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:34.672 02:53:40 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:34.672 02:53:40 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.672 02:53:40 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.240 02:53:41 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
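The devices.sh trace beginning here builds its list of usable namespaces by first excluding zoned block devices: a device counts as zoned when /sys/block/<name>/queue/zoned exists and contains something other than "none" (every namespace on this VM reads "none", so nothing is excluded). A small sketch of that filter, with the function name taken from the trace and the body condensed from the is_block_zoned helper; the array handling here is illustrative, not the exact shape used by autotest_common.sh:

get_zoned_devs() {
    local -A zoned_devs=()
    local nvme dev
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        # Zoned namespaces report e.g. "host-managed" here instead of "none".
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done
    echo "zoned namespaces found: ${#zoned_devs[@]}"    # 0 on this VM
}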
00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:35.240 02:53:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:35.240 02:53:41 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:35.240 02:53:41 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:35.240 02:53:41 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:35.240 02:53:41 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:35.240 02:53:41 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:35.240 02:53:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:35.240 02:53:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:35.240 02:53:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:35.240 02:53:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:35.240 02:53:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:35.240 02:53:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:35.240 02:53:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:35.240 02:53:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:35.499 No valid GPT data, bailing 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:35.499 02:53:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:35.499 02:53:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:35.499 02:53:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:35.499 
02:53:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:35.499 No valid GPT data, bailing 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:35.499 02:53:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:35.499 02:53:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:35.499 02:53:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:35.499 No valid GPT data, bailing 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:35.499 02:53:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:35.499 02:53:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:35.499 02:53:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:35.499 02:53:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:35.499 02:53:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:35.499 02:53:41 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:35.758 No valid GPT data, bailing 00:04:35.758 02:53:42 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:35.758 02:53:42 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:35.758 02:53:42 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:35.758 02:53:42 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:35.758 02:53:42 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:35.758 02:53:42 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:35.758 02:53:42 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:35.758 02:53:42 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:35.758 02:53:42 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:35.758 02:53:42 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:35.758 02:53:42 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:35.758 02:53:42 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:35.758 02:53:42 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:35.758 02:53:42 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.758 02:53:42 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.758 02:53:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:35.758 ************************************ 00:04:35.758 START TEST nvme_mount 00:04:35.758 ************************************ 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:35.758 02:53:42 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:36.695 Creating new GPT entries in memory. 00:04:36.695 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:36.695 other utilities. 00:04:36.695 02:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:36.695 02:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.695 02:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:36.695 02:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:36.695 02:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:37.632 Creating new GPT entries in memory. 00:04:37.632 The operation has completed successfully. 00:04:37.632 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:37.632 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:37.632 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57662 00:04:37.632 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.632 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:37.632 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.632 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:37.632 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.890 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.149 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.149 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.149 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.149 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.407 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.407 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:38.407 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.407 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.407 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:38.407 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:38.407 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.407 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.407 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.407 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:38.407 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.407 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.407 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.666 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:38.666 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:38.666 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:38.666 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.666 02:53:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:38.925 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.926 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:38.926 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:38.926 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.926 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.926 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.926 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.926 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:39.185 02:53:45 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.185 02:53:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:39.444 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:39.444 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:39.444 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:39.444 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.444 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:39.444 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.444 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:39.444 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.703 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:39.703 02:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.703 02:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.703 02:53:46 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.703 02:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:39.703 02:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:39.703 02:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.703 02:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.703 02:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.703 02:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.703 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.703 00:04:39.703 real 0m4.028s 00:04:39.703 user 0m0.717s 00:04:39.703 sys 0m1.049s 00:04:39.703 02:53:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.703 ************************************ 00:04:39.703 END TEST nvme_mount 00:04:39.703 ************************************ 00:04:39.703 02:53:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:39.703 02:53:46 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:39.703 02:53:46 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:39.703 02:53:46 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.703 02:53:46 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.703 02:53:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:39.703 ************************************ 00:04:39.703 START TEST dm_mount 00:04:39.703 ************************************ 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
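The nvme_mount test that wraps up above follows one fixed cycle: wipe the disk, create a single test partition, put ext4 on it, mount it under the nvme_mount directory, drop a test_nvme marker file, verify the mount against setup.sh output, then unmount and wipe the signatures again. A minimal sketch of that cycle (the device name and mount point are placeholders, not the paths the harness derives):

```bash
#!/usr/bin/env bash
# Hedged sketch of the partition -> mkfs -> mount -> verify -> cleanup cycle
# traced by nvme_mount above. DISK and MNT are assumed placeholder values.
set -euo pipefail

DISK=/dev/nvme0n1            # assumption: the selected test disk
PART=${DISK}p1
MNT=/tmp/nvme_mount_demo     # assumption: scratch mount point

sgdisk "$DISK" --zap-all                # drop old GPT/MBR structures
sgdisk "$DISK" --new=1:2048:264191      # one small test partition
mkfs.ext4 -qF "$PART"                   # same quiet/force flags as the trace
mkdir -p "$MNT"
mount "$PART" "$MNT"

: > "$MNT/test_nvme"                    # marker file the verify step checks
mountpoint -q "$MNT"
[[ -e "$MNT/test_nvme" ]]

rm "$MNT/test_nvme"
umount "$MNT"
wipefs --all "$PART"                    # erase the ext4 signature
wipefs --all "$DISK"                    # erase GPT and the protective MBR
```

In the real harness the sgdisk call is serialized with flock and followed by sync_dev_uevents.sh, so the new partition node is guaranteed to exist before mkfs runs.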
00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:39.703 02:53:46 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:41.081 Creating new GPT entries in memory. 00:04:41.081 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.081 other utilities. 00:04:41.081 02:53:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.081 02:53:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.081 02:53:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.081 02:53:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.081 02:53:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:42.017 Creating new GPT entries in memory. 00:04:42.017 The operation has completed successfully. 00:04:42.017 02:53:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.017 02:53:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.017 02:53:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:42.017 02:53:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.017 02:53:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:42.991 The operation has completed successfully. 
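The two sgdisk --new calls just logged come from straightforward sector arithmetic: the requested 1 GiB (1073741824 bytes) is divided by 4096 to give a length of 262144 units, the first partition starts at 2048, and each following partition starts one past the previous end, which is exactly how the ranges 2048-264191 and 264192-526335 fall out. A reduced sketch of that loop (disk name and partition count are assumptions):

```bash
#!/usr/bin/env bash
# Sketch of the arithmetic driving the sgdisk calls above; the computed
# ranges match the logged --new=1:2048:264191 and --new=2:264192:526335.
set -euo pipefail

DISK=/dev/nvme0n1    # assumption
PART_NO=2            # two partitions, as in the dm_mount case
size=1073741824      # 1 GiB request
(( size /= 4096 ))   # 262144 units per partition

sgdisk "$DISK" --zap-all

part_start=0 part_end=0
for (( part = 1; part <= PART_NO; part++ )); do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  # flock serializes writers that might touch the same disk concurrently
  flock "$DISK" sgdisk "$DISK" --new="${part}:${part_start}:${part_end}"
done
```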
00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 58095 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.991 02:53:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:43.250 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.250 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:43.250 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:43.250 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.250 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.250 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.250 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.250 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.250 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.250 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.509 02:53:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:43.767 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.768 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:43.768 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:43.768 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.768 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.768 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.768 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.768 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.768 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.768 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:44.027 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:44.027 00:04:44.027 real 0m4.250s 00:04:44.027 user 0m0.495s 00:04:44.027 sys 0m0.721s 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.027 ************************************ 00:04:44.027 END TEST dm_mount 00:04:44.027 02:53:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:44.027 ************************************ 00:04:44.027 02:53:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:44.027 02:53:50 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:44.027 02:53:50 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:44.027 02:53:50 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:44.027 02:53:50 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:44.027 02:53:50 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:44.027 02:53:50 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:44.027 02:53:50 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:44.285 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:44.285 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:44.285 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:44.285 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:44.285 02:53:50 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:44.285 02:53:50 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:44.285 02:53:50 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:44.285 02:53:50 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:44.285 02:53:50 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:44.285 02:53:50 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:44.285 02:53:50 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:44.285 00:04:44.285 real 0m9.799s 00:04:44.285 user 0m1.863s 00:04:44.285 sys 0m2.339s 00:04:44.285 02:53:50 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.285 02:53:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:44.285 ************************************ 00:04:44.285 END TEST devices 00:04:44.285 ************************************ 00:04:44.285 02:53:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:44.285 00:04:44.285 real 0m21.795s 00:04:44.285 user 0m7.112s 00:04:44.285 sys 0m8.966s 00:04:44.285 02:53:50 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.285 02:53:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:44.285 ************************************ 00:04:44.285 END TEST setup.sh 00:04:44.285 ************************************ 00:04:44.543 02:53:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.543 02:53:50 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:45.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.110 Hugepages 00:04:45.110 node hugesize free / total 00:04:45.110 node0 1048576kB 0 / 0 00:04:45.110 node0 2048kB 2048 / 2048 00:04:45.110 00:04:45.110 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.110 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:45.368 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:45.368 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:45.368 02:53:51 -- spdk/autotest.sh@130 -- # uname -s 00:04:45.368 02:53:51 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:45.368 02:53:51 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:45.368 02:53:51 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.934 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.934 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.192 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.192 02:53:52 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:47.127 02:53:53 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:47.127 02:53:53 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:47.127 02:53:53 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:47.127 02:53:53 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:47.127 02:53:53 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:47.127 02:53:53 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:47.127 02:53:53 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:47.127 02:53:53 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:47.127 02:53:53 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:47.127 02:53:53 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:47.127 02:53:53 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:47.127 02:53:53 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.694 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.694 Waiting for block devices as requested 00:04:47.694 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.694 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.694 02:53:54 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:47.694 02:53:54 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:47.694 02:53:54 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:47.694 02:53:54 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:47.694 02:53:54 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:47.694 02:53:54 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:47.694 02:53:54 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:47.694 02:53:54 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:47.694 02:53:54 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:47.694 02:53:54 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:47.694 02:53:54 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:47.694 02:53:54 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:47.694 02:53:54 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:47.694 02:53:54 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:47.694 02:53:54 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:47.694 02:53:54 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:47.952 02:53:54 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:47.953 02:53:54 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:47.953 02:53:54 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:47.953 02:53:54 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:47.953 02:53:54 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:47.953 02:53:54 -- common/autotest_common.sh@1557 -- # continue 00:04:47.953 
02:53:54 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:47.953 02:53:54 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:47.953 02:53:54 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:47.953 02:53:54 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:47.953 02:53:54 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:47.953 02:53:54 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:47.953 02:53:54 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:47.953 02:53:54 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:47.953 02:53:54 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:47.953 02:53:54 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:47.953 02:53:54 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:47.953 02:53:54 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:47.953 02:53:54 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:47.953 02:53:54 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:47.953 02:53:54 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:47.953 02:53:54 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:47.953 02:53:54 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:47.953 02:53:54 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:47.953 02:53:54 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:47.953 02:53:54 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:47.953 02:53:54 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:47.953 02:53:54 -- common/autotest_common.sh@1557 -- # continue 00:04:47.953 02:53:54 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:47.953 02:53:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:47.953 02:53:54 -- common/autotest_common.sh@10 -- # set +x 00:04:47.953 02:53:54 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:47.953 02:53:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:47.953 02:53:54 -- common/autotest_common.sh@10 -- # set +x 00:04:47.953 02:53:54 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.520 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.779 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.779 02:53:55 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:48.779 02:53:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:48.779 02:53:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.779 02:53:55 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:48.779 02:53:55 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:48.779 02:53:55 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:48.779 02:53:55 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:48.779 02:53:55 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:48.779 02:53:55 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:48.779 02:53:55 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:48.779 02:53:55 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:48.779 02:53:55 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.779 02:53:55 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:48.779 02:53:55 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:48.779 02:53:55 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:48.779 02:53:55 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:48.779 02:53:55 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:48.779 02:53:55 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:48.779 02:53:55 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:48.779 02:53:55 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:48.779 02:53:55 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:48.779 02:53:55 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:48.779 02:53:55 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:48.779 02:53:55 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:48.779 02:53:55 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:48.779 02:53:55 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:48.779 02:53:55 -- common/autotest_common.sh@1593 -- # return 0 00:04:48.779 02:53:55 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:48.779 02:53:55 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:48.779 02:53:55 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:48.779 02:53:55 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:48.779 02:53:55 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:48.779 02:53:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:48.779 02:53:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.779 02:53:55 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:48.779 02:53:55 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:48.779 02:53:55 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:48.779 02:53:55 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:48.779 02:53:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.779 02:53:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.779 02:53:55 -- common/autotest_common.sh@10 -- # set +x 00:04:48.779 ************************************ 00:04:48.779 START TEST env 00:04:48.779 ************************************ 00:04:48.779 02:53:55 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:49.038 * Looking for test storage... 
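Two controller checks traced just before this point are worth unpacking. For each PCI address reported by gen_nvme.sh, the harness resolves the matching /dev/nvmeX node through sysfs and reads the OACS word from nvme id-ctrl (bit 3, value 0x8, is namespace management; 0x12a & 0x8 = 8 here), and opal_revert_cleanup separately compares the PCI device ID against 0x0a54 before attempting any Opal revert. A condensed sketch of those lookups, with the BDF list hard-coded to the two controllers seen in this run:

```bash
#!/usr/bin/env bash
# Sketch of the per-controller checks traced above. The BDF list comes from
# this run's log; the commands mirror the ones in the trace.
set -euo pipefail

for bdf in 0000:00:10.0 0000:00:11.0; do
  # Resolve the PCI address to its /dev/nvmeX controller node via sysfs
  ctrl_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
  ctrl=/dev/$(basename "$ctrl_path")

  # OACS word from Identify Controller; bit 3 (0x8) = namespace management
  oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
  echo "$bdf -> $ctrl  oacs=$oacs  ns_manage=$(( oacs & 0x8 ))"

  # Device ID check used by opal_revert_cleanup (only 0x0a54 gets a revert)
  if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]]; then
    echo "$bdf would be queued for an Opal revert"
  fi
done
```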
00:04:49.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:49.038 02:53:55 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:49.038 02:53:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.038 02:53:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.038 02:53:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.038 ************************************ 00:04:49.038 START TEST env_memory 00:04:49.038 ************************************ 00:04:49.038 02:53:55 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:49.038 00:04:49.038 00:04:49.038 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.038 http://cunit.sourceforge.net/ 00:04:49.038 00:04:49.038 00:04:49.038 Suite: memory 00:04:49.038 Test: alloc and free memory map ...[2024-07-13 02:53:55.419637] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:49.038 passed 00:04:49.038 Test: mem map translation ...[2024-07-13 02:53:55.479750] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:49.038 [2024-07-13 02:53:55.479814] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:49.038 [2024-07-13 02:53:55.479923] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:49.038 [2024-07-13 02:53:55.479957] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:49.297 passed 00:04:49.297 Test: mem map registration ...[2024-07-13 02:53:55.577690] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:49.297 [2024-07-13 02:53:55.577760] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:49.297 passed 00:04:49.297 Test: mem map adjacent registrations ...passed 00:04:49.297 00:04:49.297 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.297 suites 1 1 n/a 0 0 00:04:49.297 tests 4 4 4 0 0 00:04:49.297 asserts 152 152 152 0 n/a 00:04:49.297 00:04:49.297 Elapsed time = 0.341 seconds 00:04:49.297 00:04:49.297 real 0m0.381s 00:04:49.297 user 0m0.344s 00:04:49.297 sys 0m0.031s 00:04:49.297 02:53:55 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.297 02:53:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:49.297 ************************************ 00:04:49.297 END TEST env_memory 00:04:49.297 ************************************ 00:04:49.297 02:53:55 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.297 02:53:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:49.297 02:53:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.297 02:53:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.297 02:53:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.297 ************************************ 00:04:49.297 START TEST env_vtophys 
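Every block in this log framed by a row of asterisks with a START TEST/END TEST pair and a real/user/sys summary is produced by the harness's run_test wrapper, which times a named sub-test and propagates its exit status. The sketch below is an illustrative stand-in for that wrapper, not the actual autotest_common.sh implementation (which also toggles xtrace around the body):

```bash
#!/usr/bin/env bash
# Illustrative stand-in for the run_test helper whose banners appear
# throughout this log; only the visible behaviour is reproduced here.
run_test_demo() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"            # run the sub-test; bash prints real/user/sys
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return "$rc"
}

# usage: run_test_demo env_vtophys /path/to/vtophys   (path is a placeholder)
```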
00:04:49.297 ************************************ 00:04:49.297 02:53:55 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:49.556 EAL: lib.eal log level changed from notice to debug 00:04:49.556 EAL: Detected lcore 0 as core 0 on socket 0 00:04:49.556 EAL: Detected lcore 1 as core 0 on socket 0 00:04:49.556 EAL: Detected lcore 2 as core 0 on socket 0 00:04:49.556 EAL: Detected lcore 3 as core 0 on socket 0 00:04:49.556 EAL: Detected lcore 4 as core 0 on socket 0 00:04:49.556 EAL: Detected lcore 5 as core 0 on socket 0 00:04:49.556 EAL: Detected lcore 6 as core 0 on socket 0 00:04:49.556 EAL: Detected lcore 7 as core 0 on socket 0 00:04:49.556 EAL: Detected lcore 8 as core 0 on socket 0 00:04:49.556 EAL: Detected lcore 9 as core 0 on socket 0 00:04:49.556 EAL: Maximum logical cores by configuration: 128 00:04:49.556 EAL: Detected CPU lcores: 10 00:04:49.556 EAL: Detected NUMA nodes: 1 00:04:49.556 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:49.556 EAL: Detected shared linkage of DPDK 00:04:49.556 EAL: No shared files mode enabled, IPC will be disabled 00:04:49.556 EAL: Selected IOVA mode 'PA' 00:04:49.556 EAL: Probing VFIO support... 00:04:49.556 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:49.556 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:49.556 EAL: Ask a virtual area of 0x2e000 bytes 00:04:49.556 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:49.556 EAL: Setting up physically contiguous memory... 00:04:49.556 EAL: Setting maximum number of open files to 524288 00:04:49.556 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:49.556 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:49.556 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.556 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:49.556 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.556 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.556 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:49.556 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:49.556 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.556 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:49.556 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.556 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.556 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:49.556 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:49.556 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.556 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:49.556 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.556 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.556 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:49.556 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:49.556 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.556 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:49.556 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.556 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.556 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:49.556 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:49.556 EAL: Hugepages will be freed exactly as allocated. 
00:04:49.556 EAL: No shared files mode enabled, IPC is disabled 00:04:49.556 EAL: No shared files mode enabled, IPC is disabled 00:04:49.556 EAL: TSC frequency is ~2200000 KHz 00:04:49.556 EAL: Main lcore 0 is ready (tid=7f43d7411a40;cpuset=[0]) 00:04:49.556 EAL: Trying to obtain current memory policy. 00:04:49.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.556 EAL: Restoring previous memory policy: 0 00:04:49.556 EAL: request: mp_malloc_sync 00:04:49.556 EAL: No shared files mode enabled, IPC is disabled 00:04:49.556 EAL: Heap on socket 0 was expanded by 2MB 00:04:49.556 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:49.556 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:49.556 EAL: Mem event callback 'spdk:(nil)' registered 00:04:49.556 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:49.556 00:04:49.556 00:04:49.556 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.556 http://cunit.sourceforge.net/ 00:04:49.556 00:04:49.556 00:04:49.556 Suite: components_suite 00:04:50.125 Test: vtophys_malloc_test ...passed 00:04:50.125 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:50.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.125 EAL: Restoring previous memory policy: 4 00:04:50.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.125 EAL: request: mp_malloc_sync 00:04:50.125 EAL: No shared files mode enabled, IPC is disabled 00:04:50.125 EAL: Heap on socket 0 was expanded by 4MB 00:04:50.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.125 EAL: request: mp_malloc_sync 00:04:50.125 EAL: No shared files mode enabled, IPC is disabled 00:04:50.125 EAL: Heap on socket 0 was shrunk by 4MB 00:04:50.125 EAL: Trying to obtain current memory policy. 00:04:50.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.125 EAL: Restoring previous memory policy: 4 00:04:50.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.125 EAL: request: mp_malloc_sync 00:04:50.125 EAL: No shared files mode enabled, IPC is disabled 00:04:50.125 EAL: Heap on socket 0 was expanded by 6MB 00:04:50.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.125 EAL: request: mp_malloc_sync 00:04:50.125 EAL: No shared files mode enabled, IPC is disabled 00:04:50.125 EAL: Heap on socket 0 was shrunk by 6MB 00:04:50.125 EAL: Trying to obtain current memory policy. 00:04:50.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.125 EAL: Restoring previous memory policy: 4 00:04:50.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.125 EAL: request: mp_malloc_sync 00:04:50.125 EAL: No shared files mode enabled, IPC is disabled 00:04:50.125 EAL: Heap on socket 0 was expanded by 10MB 00:04:50.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.125 EAL: request: mp_malloc_sync 00:04:50.125 EAL: No shared files mode enabled, IPC is disabled 00:04:50.125 EAL: Heap on socket 0 was shrunk by 10MB 00:04:50.125 EAL: Trying to obtain current memory policy. 
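The expand/shrink messages that follow are the vtophys malloc test deliberately growing and releasing the socket-0 heap in hugepage-sized steps; each change triggers the registered 'spdk:' mem event callback so SPDK can update its memory maps. The backing pages come from the 2048 kB hugepage pool shown in the earlier setup.sh status output (node0, 2048 of 2048 free). A quick way to inspect that pool on a host like this one, as a sketch:

```bash
#!/usr/bin/env bash
# Sketch: inspect the 2 MiB hugepage pool the EAL heap draws from.
# Standard sysfs/procfs locations; node0 matches the single NUMA node
# detected in this run.
hp=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
echo "node0 2048kB total: $(cat "$hp/nr_hugepages")"
echo "node0 2048kB free:  $(cat "$hp/free_hugepages")"
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
```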
00:04:50.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.125 EAL: Restoring previous memory policy: 4 00:04:50.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.125 EAL: request: mp_malloc_sync 00:04:50.125 EAL: No shared files mode enabled, IPC is disabled 00:04:50.125 EAL: Heap on socket 0 was expanded by 18MB 00:04:50.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.125 EAL: request: mp_malloc_sync 00:04:50.125 EAL: No shared files mode enabled, IPC is disabled 00:04:50.125 EAL: Heap on socket 0 was shrunk by 18MB 00:04:50.125 EAL: Trying to obtain current memory policy. 00:04:50.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.125 EAL: Restoring previous memory policy: 4 00:04:50.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.125 EAL: request: mp_malloc_sync 00:04:50.125 EAL: No shared files mode enabled, IPC is disabled 00:04:50.125 EAL: Heap on socket 0 was expanded by 34MB 00:04:50.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.125 EAL: request: mp_malloc_sync 00:04:50.125 EAL: No shared files mode enabled, IPC is disabled 00:04:50.125 EAL: Heap on socket 0 was shrunk by 34MB 00:04:50.125 EAL: Trying to obtain current memory policy. 00:04:50.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.125 EAL: Restoring previous memory policy: 4 00:04:50.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.125 EAL: request: mp_malloc_sync 00:04:50.125 EAL: No shared files mode enabled, IPC is disabled 00:04:50.126 EAL: Heap on socket 0 was expanded by 66MB 00:04:50.126 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.126 EAL: request: mp_malloc_sync 00:04:50.126 EAL: No shared files mode enabled, IPC is disabled 00:04:50.126 EAL: Heap on socket 0 was shrunk by 66MB 00:04:50.385 EAL: Trying to obtain current memory policy. 00:04:50.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.385 EAL: Restoring previous memory policy: 4 00:04:50.385 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.385 EAL: request: mp_malloc_sync 00:04:50.385 EAL: No shared files mode enabled, IPC is disabled 00:04:50.385 EAL: Heap on socket 0 was expanded by 130MB 00:04:50.385 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.385 EAL: request: mp_malloc_sync 00:04:50.385 EAL: No shared files mode enabled, IPC is disabled 00:04:50.385 EAL: Heap on socket 0 was shrunk by 130MB 00:04:50.644 EAL: Trying to obtain current memory policy. 00:04:50.644 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.644 EAL: Restoring previous memory policy: 4 00:04:50.644 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.644 EAL: request: mp_malloc_sync 00:04:50.644 EAL: No shared files mode enabled, IPC is disabled 00:04:50.644 EAL: Heap on socket 0 was expanded by 258MB 00:04:50.904 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.904 EAL: request: mp_malloc_sync 00:04:50.904 EAL: No shared files mode enabled, IPC is disabled 00:04:50.904 EAL: Heap on socket 0 was shrunk by 258MB 00:04:51.163 EAL: Trying to obtain current memory policy. 
00:04:51.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.422 EAL: Restoring previous memory policy: 4 00:04:51.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.422 EAL: request: mp_malloc_sync 00:04:51.422 EAL: No shared files mode enabled, IPC is disabled 00:04:51.422 EAL: Heap on socket 0 was expanded by 514MB 00:04:51.990 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.990 EAL: request: mp_malloc_sync 00:04:51.990 EAL: No shared files mode enabled, IPC is disabled 00:04:51.990 EAL: Heap on socket 0 was shrunk by 514MB 00:04:52.558 EAL: Trying to obtain current memory policy. 00:04:52.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.817 EAL: Restoring previous memory policy: 4 00:04:52.817 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.817 EAL: request: mp_malloc_sync 00:04:52.817 EAL: No shared files mode enabled, IPC is disabled 00:04:52.817 EAL: Heap on socket 0 was expanded by 1026MB 00:04:54.194 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.194 EAL: request: mp_malloc_sync 00:04:54.194 EAL: No shared files mode enabled, IPC is disabled 00:04:54.194 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:55.570 passed 00:04:55.570 00:04:55.570 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.570 suites 1 1 n/a 0 0 00:04:55.570 tests 2 2 2 0 0 00:04:55.570 asserts 5299 5299 5299 0 n/a 00:04:55.570 00:04:55.570 Elapsed time = 5.836 seconds 00:04:55.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.570 EAL: request: mp_malloc_sync 00:04:55.571 EAL: No shared files mode enabled, IPC is disabled 00:04:55.571 EAL: Heap on socket 0 was shrunk by 2MB 00:04:55.571 EAL: No shared files mode enabled, IPC is disabled 00:04:55.571 EAL: No shared files mode enabled, IPC is disabled 00:04:55.571 EAL: No shared files mode enabled, IPC is disabled 00:04:55.571 00:04:55.571 real 0m6.141s 00:04:55.571 user 0m5.356s 00:04:55.571 sys 0m0.639s 00:04:55.571 02:54:01 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.571 02:54:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:55.571 ************************************ 00:04:55.571 END TEST env_vtophys 00:04:55.571 ************************************ 00:04:55.571 02:54:01 env -- common/autotest_common.sh@1142 -- # return 0 00:04:55.571 02:54:01 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:55.571 02:54:01 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.571 02:54:01 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.571 02:54:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.571 ************************************ 00:04:55.571 START TEST env_pci 00:04:55.571 ************************************ 00:04:55.571 02:54:01 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:55.571 00:04:55.571 00:04:55.571 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.571 http://cunit.sourceforge.net/ 00:04:55.571 00:04:55.571 00:04:55.571 Suite: pci 00:04:55.571 Test: pci_hook ...[2024-07-13 02:54:02.004228] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59340 has claimed it 00:04:55.571 passed 00:04:55.571 00:04:55.571 EAL: Cannot find device (10000:00:01.0) 00:04:55.571 EAL: Failed to attach device on primary process 00:04:55.571 Run Summary: Type Total Ran Passed Failed 
Inactive 00:04:55.571 suites 1 1 n/a 0 0 00:04:55.571 tests 1 1 1 0 0 00:04:55.571 asserts 25 25 25 0 n/a 00:04:55.571 00:04:55.571 Elapsed time = 0.005 seconds 00:04:55.571 00:04:55.571 real 0m0.071s 00:04:55.571 user 0m0.037s 00:04:55.571 sys 0m0.033s 00:04:55.571 02:54:02 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.571 02:54:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:55.571 ************************************ 00:04:55.571 END TEST env_pci 00:04:55.571 ************************************ 00:04:55.830 02:54:02 env -- common/autotest_common.sh@1142 -- # return 0 00:04:55.830 02:54:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:55.830 02:54:02 env -- env/env.sh@15 -- # uname 00:04:55.830 02:54:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:55.830 02:54:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:55.830 02:54:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:55.830 02:54:02 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:55.830 02:54:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.830 02:54:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.830 ************************************ 00:04:55.830 START TEST env_dpdk_post_init 00:04:55.830 ************************************ 00:04:55.830 02:54:02 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:55.830 EAL: Detected CPU lcores: 10 00:04:55.830 EAL: Detected NUMA nodes: 1 00:04:55.830 EAL: Detected shared linkage of DPDK 00:04:55.830 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.830 EAL: Selected IOVA mode 'PA' 00:04:55.830 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.109 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:56.109 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:56.109 Starting DPDK initialization... 00:04:56.109 Starting SPDK post initialization... 00:04:56.109 SPDK NVMe probe 00:04:56.109 Attaching to 0000:00:10.0 00:04:56.109 Attaching to 0000:00:11.0 00:04:56.109 Attached to 0000:00:10.0 00:04:56.109 Attached to 0000:00:11.0 00:04:56.109 Cleaning up... 
00:04:56.109 00:04:56.109 real 0m0.278s 00:04:56.109 user 0m0.083s 00:04:56.109 sys 0m0.095s 00:04:56.109 02:54:02 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.109 02:54:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.109 ************************************ 00:04:56.109 END TEST env_dpdk_post_init 00:04:56.109 ************************************ 00:04:56.109 02:54:02 env -- common/autotest_common.sh@1142 -- # return 0 00:04:56.109 02:54:02 env -- env/env.sh@26 -- # uname 00:04:56.109 02:54:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:56.109 02:54:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.109 02:54:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.109 02:54:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.109 02:54:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.109 ************************************ 00:04:56.109 START TEST env_mem_callbacks 00:04:56.109 ************************************ 00:04:56.109 02:54:02 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.109 EAL: Detected CPU lcores: 10 00:04:56.109 EAL: Detected NUMA nodes: 1 00:04:56.109 EAL: Detected shared linkage of DPDK 00:04:56.109 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.109 EAL: Selected IOVA mode 'PA' 00:04:56.384 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.384 00:04:56.384 00:04:56.384 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.384 http://cunit.sourceforge.net/ 00:04:56.384 00:04:56.384 00:04:56.384 Suite: memory 00:04:56.384 Test: test ... 
00:04:56.384 register 0x200000200000 2097152 00:04:56.384 malloc 3145728 00:04:56.384 register 0x200000400000 4194304 00:04:56.384 buf 0x2000004fffc0 len 3145728 PASSED 00:04:56.384 malloc 64 00:04:56.384 buf 0x2000004ffec0 len 64 PASSED 00:04:56.384 malloc 4194304 00:04:56.384 register 0x200000800000 6291456 00:04:56.384 buf 0x2000009fffc0 len 4194304 PASSED 00:04:56.384 free 0x2000004fffc0 3145728 00:04:56.384 free 0x2000004ffec0 64 00:04:56.384 unregister 0x200000400000 4194304 PASSED 00:04:56.384 free 0x2000009fffc0 4194304 00:04:56.384 unregister 0x200000800000 6291456 PASSED 00:04:56.384 malloc 8388608 00:04:56.384 register 0x200000400000 10485760 00:04:56.384 buf 0x2000005fffc0 len 8388608 PASSED 00:04:56.384 free 0x2000005fffc0 8388608 00:04:56.384 unregister 0x200000400000 10485760 PASSED 00:04:56.384 passed 00:04:56.384 00:04:56.385 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.385 suites 1 1 n/a 0 0 00:04:56.385 tests 1 1 1 0 0 00:04:56.385 asserts 15 15 15 0 n/a 00:04:56.385 00:04:56.385 Elapsed time = 0.059 seconds 00:04:56.385 00:04:56.385 real 0m0.265s 00:04:56.385 user 0m0.106s 00:04:56.385 sys 0m0.056s 00:04:56.385 02:54:02 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.385 02:54:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 ************************************ 00:04:56.385 END TEST env_mem_callbacks 00:04:56.385 ************************************ 00:04:56.385 02:54:02 env -- common/autotest_common.sh@1142 -- # return 0 00:04:56.385 ************************************ 00:04:56.385 END TEST env 00:04:56.385 ************************************ 00:04:56.385 00:04:56.385 real 0m7.493s 00:04:56.385 user 0m6.054s 00:04:56.385 sys 0m1.063s 00:04:56.385 02:54:02 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.385 02:54:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 02:54:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:56.385 02:54:02 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:56.385 02:54:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.385 02:54:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.385 02:54:02 -- common/autotest_common.sh@10 -- # set +x 00:04:56.385 ************************************ 00:04:56.385 START TEST rpc 00:04:56.385 ************************************ 00:04:56.385 02:54:02 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:56.385 * Looking for test storage... 00:04:56.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:56.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.642 02:54:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59459 00:04:56.642 02:54:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.642 02:54:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59459 00:04:56.642 02:54:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:56.642 02:54:02 rpc -- common/autotest_common.sh@829 -- # '[' -z 59459 ']' 00:04:56.642 02:54:02 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.642 02:54:02 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.642 02:54:02 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
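(Note: the env unit tests recorded above can be re-run by hand from an SPDK checkout; the binaries and flags below are the ones this log invoked, and it is assumed that hugepages and devices still need to be prepared, e.g. with scripts/setup.sh.)

cd /home/vagrant/spdk_repo/spdk
sudo scripts/setup.sh                 # assumption: hugepage/device setup not yet done
sudo test/env/pci/pci_ut
sudo test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
sudo test/env/mem_callbacks/mem_callbacks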
00:04:56.642 02:54:02 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.642 02:54:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.642 [2024-07-13 02:54:03.013713] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:56.642 [2024-07-13 02:54:03.013881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59459 ] 00:04:56.900 [2024-07-13 02:54:03.185123] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.900 [2024-07-13 02:54:03.372380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:56.900 [2024-07-13 02:54:03.372438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59459' to capture a snapshot of events at runtime. 00:04:56.900 [2024-07-13 02:54:03.372481] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:56.900 [2024-07-13 02:54:03.372492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:56.900 [2024-07-13 02:54:03.372505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59459 for offline analysis/debug. 00:04:56.900 [2024-07-13 02:54:03.372542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.159 [2024-07-13 02:54:03.521834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:57.726 02:54:03 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.726 02:54:03 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:57.726 02:54:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:57.726 02:54:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:57.726 02:54:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:57.726 02:54:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:57.726 02:54:03 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.726 02:54:03 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.726 02:54:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.726 ************************************ 00:04:57.726 START TEST rpc_integrity 00:04:57.726 ************************************ 00:04:57.726 02:54:03 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:57.726 02:54:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:57.726 02:54:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.726 02:54:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.726 02:54:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.726 02:54:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:57.726 02:54:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:57.726 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:57.726 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:04:57.726 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.726 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.726 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.726 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:57.726 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:57.726 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.726 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.726 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.726 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:57.726 { 00:04:57.726 "name": "Malloc0", 00:04:57.726 "aliases": [ 00:04:57.726 "24d4ef33-a221-45b8-af82-0f966e8c42fb" 00:04:57.726 ], 00:04:57.726 "product_name": "Malloc disk", 00:04:57.726 "block_size": 512, 00:04:57.726 "num_blocks": 16384, 00:04:57.726 "uuid": "24d4ef33-a221-45b8-af82-0f966e8c42fb", 00:04:57.726 "assigned_rate_limits": { 00:04:57.726 "rw_ios_per_sec": 0, 00:04:57.726 "rw_mbytes_per_sec": 0, 00:04:57.726 "r_mbytes_per_sec": 0, 00:04:57.726 "w_mbytes_per_sec": 0 00:04:57.726 }, 00:04:57.726 "claimed": false, 00:04:57.726 "zoned": false, 00:04:57.726 "supported_io_types": { 00:04:57.726 "read": true, 00:04:57.727 "write": true, 00:04:57.727 "unmap": true, 00:04:57.727 "flush": true, 00:04:57.727 "reset": true, 00:04:57.727 "nvme_admin": false, 00:04:57.727 "nvme_io": false, 00:04:57.727 "nvme_io_md": false, 00:04:57.727 "write_zeroes": true, 00:04:57.727 "zcopy": true, 00:04:57.727 "get_zone_info": false, 00:04:57.727 "zone_management": false, 00:04:57.727 "zone_append": false, 00:04:57.727 "compare": false, 00:04:57.727 "compare_and_write": false, 00:04:57.727 "abort": true, 00:04:57.727 "seek_hole": false, 00:04:57.727 "seek_data": false, 00:04:57.727 "copy": true, 00:04:57.727 "nvme_iov_md": false 00:04:57.727 }, 00:04:57.727 "memory_domains": [ 00:04:57.727 { 00:04:57.727 "dma_device_id": "system", 00:04:57.727 "dma_device_type": 1 00:04:57.727 }, 00:04:57.727 { 00:04:57.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.727 "dma_device_type": 2 00:04:57.727 } 00:04:57.727 ], 00:04:57.727 "driver_specific": {} 00:04:57.727 } 00:04:57.727 ]' 00:04:57.727 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:57.727 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:57.727 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:57.727 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.727 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.727 [2024-07-13 02:54:04.138031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:57.727 [2024-07-13 02:54:04.138118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:57.727 [2024-07-13 02:54:04.138154] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:04:57.727 [2024-07-13 02:54:04.138171] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:57.727 [2024-07-13 02:54:04.140543] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:57.727 [2024-07-13 02:54:04.140603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:04:57.727 Passthru0 00:04:57.727 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.727 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:57.727 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.727 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.727 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.727 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:57.727 { 00:04:57.727 "name": "Malloc0", 00:04:57.727 "aliases": [ 00:04:57.727 "24d4ef33-a221-45b8-af82-0f966e8c42fb" 00:04:57.727 ], 00:04:57.727 "product_name": "Malloc disk", 00:04:57.727 "block_size": 512, 00:04:57.727 "num_blocks": 16384, 00:04:57.727 "uuid": "24d4ef33-a221-45b8-af82-0f966e8c42fb", 00:04:57.727 "assigned_rate_limits": { 00:04:57.727 "rw_ios_per_sec": 0, 00:04:57.727 "rw_mbytes_per_sec": 0, 00:04:57.727 "r_mbytes_per_sec": 0, 00:04:57.727 "w_mbytes_per_sec": 0 00:04:57.727 }, 00:04:57.727 "claimed": true, 00:04:57.727 "claim_type": "exclusive_write", 00:04:57.727 "zoned": false, 00:04:57.727 "supported_io_types": { 00:04:57.727 "read": true, 00:04:57.727 "write": true, 00:04:57.727 "unmap": true, 00:04:57.727 "flush": true, 00:04:57.727 "reset": true, 00:04:57.727 "nvme_admin": false, 00:04:57.727 "nvme_io": false, 00:04:57.727 "nvme_io_md": false, 00:04:57.727 "write_zeroes": true, 00:04:57.727 "zcopy": true, 00:04:57.727 "get_zone_info": false, 00:04:57.727 "zone_management": false, 00:04:57.727 "zone_append": false, 00:04:57.727 "compare": false, 00:04:57.727 "compare_and_write": false, 00:04:57.727 "abort": true, 00:04:57.727 "seek_hole": false, 00:04:57.727 "seek_data": false, 00:04:57.727 "copy": true, 00:04:57.727 "nvme_iov_md": false 00:04:57.727 }, 00:04:57.727 "memory_domains": [ 00:04:57.727 { 00:04:57.727 "dma_device_id": "system", 00:04:57.727 "dma_device_type": 1 00:04:57.727 }, 00:04:57.727 { 00:04:57.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.727 "dma_device_type": 2 00:04:57.727 } 00:04:57.727 ], 00:04:57.727 "driver_specific": {} 00:04:57.727 }, 00:04:57.727 { 00:04:57.727 "name": "Passthru0", 00:04:57.727 "aliases": [ 00:04:57.727 "19375aa4-124f-5d2f-815f-47430917cbdd" 00:04:57.727 ], 00:04:57.727 "product_name": "passthru", 00:04:57.727 "block_size": 512, 00:04:57.727 "num_blocks": 16384, 00:04:57.727 "uuid": "19375aa4-124f-5d2f-815f-47430917cbdd", 00:04:57.727 "assigned_rate_limits": { 00:04:57.727 "rw_ios_per_sec": 0, 00:04:57.727 "rw_mbytes_per_sec": 0, 00:04:57.727 "r_mbytes_per_sec": 0, 00:04:57.727 "w_mbytes_per_sec": 0 00:04:57.727 }, 00:04:57.727 "claimed": false, 00:04:57.727 "zoned": false, 00:04:57.727 "supported_io_types": { 00:04:57.727 "read": true, 00:04:57.727 "write": true, 00:04:57.727 "unmap": true, 00:04:57.727 "flush": true, 00:04:57.727 "reset": true, 00:04:57.727 "nvme_admin": false, 00:04:57.727 "nvme_io": false, 00:04:57.727 "nvme_io_md": false, 00:04:57.727 "write_zeroes": true, 00:04:57.727 "zcopy": true, 00:04:57.727 "get_zone_info": false, 00:04:57.727 "zone_management": false, 00:04:57.727 "zone_append": false, 00:04:57.727 "compare": false, 00:04:57.727 "compare_and_write": false, 00:04:57.727 "abort": true, 00:04:57.727 "seek_hole": false, 00:04:57.727 "seek_data": false, 00:04:57.727 "copy": true, 00:04:57.727 "nvme_iov_md": false 00:04:57.727 }, 00:04:57.727 "memory_domains": [ 00:04:57.727 { 00:04:57.727 "dma_device_id": "system", 00:04:57.727 
"dma_device_type": 1 00:04:57.727 }, 00:04:57.727 { 00:04:57.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.727 "dma_device_type": 2 00:04:57.727 } 00:04:57.727 ], 00:04:57.727 "driver_specific": { 00:04:57.727 "passthru": { 00:04:57.727 "name": "Passthru0", 00:04:57.727 "base_bdev_name": "Malloc0" 00:04:57.727 } 00:04:57.727 } 00:04:57.727 } 00:04:57.727 ]' 00:04:57.727 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:57.986 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:57.986 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:57.986 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.986 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.986 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.986 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:57.986 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.986 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.986 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.986 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:57.986 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.986 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.986 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.986 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:57.986 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:57.986 ************************************ 00:04:57.986 END TEST rpc_integrity 00:04:57.986 ************************************ 00:04:57.986 02:54:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:57.986 00:04:57.986 real 0m0.346s 00:04:57.986 user 0m0.218s 00:04:57.986 sys 0m0.040s 00:04:57.986 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.986 02:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.986 02:54:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.986 02:54:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:57.986 02:54:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.986 02:54:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.986 02:54:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.986 ************************************ 00:04:57.986 START TEST rpc_plugins 00:04:57.986 ************************************ 00:04:57.986 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:57.986 02:54:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:57.986 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.986 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.986 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.986 02:54:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:57.986 02:54:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:57.987 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.987 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.987 
02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.987 02:54:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:57.987 { 00:04:57.987 "name": "Malloc1", 00:04:57.987 "aliases": [ 00:04:57.987 "220c3ca2-93da-4ae8-b8a2-fb3acb5c798a" 00:04:57.987 ], 00:04:57.987 "product_name": "Malloc disk", 00:04:57.987 "block_size": 4096, 00:04:57.987 "num_blocks": 256, 00:04:57.987 "uuid": "220c3ca2-93da-4ae8-b8a2-fb3acb5c798a", 00:04:57.987 "assigned_rate_limits": { 00:04:57.987 "rw_ios_per_sec": 0, 00:04:57.987 "rw_mbytes_per_sec": 0, 00:04:57.987 "r_mbytes_per_sec": 0, 00:04:57.987 "w_mbytes_per_sec": 0 00:04:57.987 }, 00:04:57.987 "claimed": false, 00:04:57.987 "zoned": false, 00:04:57.987 "supported_io_types": { 00:04:57.987 "read": true, 00:04:57.987 "write": true, 00:04:57.987 "unmap": true, 00:04:57.987 "flush": true, 00:04:57.987 "reset": true, 00:04:57.987 "nvme_admin": false, 00:04:57.987 "nvme_io": false, 00:04:57.987 "nvme_io_md": false, 00:04:57.987 "write_zeroes": true, 00:04:57.987 "zcopy": true, 00:04:57.987 "get_zone_info": false, 00:04:57.987 "zone_management": false, 00:04:57.987 "zone_append": false, 00:04:57.987 "compare": false, 00:04:57.987 "compare_and_write": false, 00:04:57.987 "abort": true, 00:04:57.987 "seek_hole": false, 00:04:57.987 "seek_data": false, 00:04:57.987 "copy": true, 00:04:57.987 "nvme_iov_md": false 00:04:57.987 }, 00:04:57.987 "memory_domains": [ 00:04:57.987 { 00:04:57.987 "dma_device_id": "system", 00:04:57.987 "dma_device_type": 1 00:04:57.987 }, 00:04:57.987 { 00:04:57.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.987 "dma_device_type": 2 00:04:57.987 } 00:04:57.987 ], 00:04:57.987 "driver_specific": {} 00:04:57.987 } 00:04:57.987 ]' 00:04:57.987 02:54:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:57.987 02:54:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:57.987 02:54:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:57.987 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.987 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.987 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.987 02:54:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:57.987 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.987 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.246 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.246 02:54:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:58.246 02:54:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:58.246 02:54:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:58.246 00:04:58.246 real 0m0.162s 00:04:58.246 user 0m0.108s 00:04:58.246 sys 0m0.019s 00:04:58.246 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.246 02:54:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.246 ************************************ 00:04:58.246 END TEST rpc_plugins 00:04:58.246 ************************************ 00:04:58.246 02:54:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:58.246 02:54:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:58.246 02:54:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.246 02:54:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:04:58.246 02:54:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.246 ************************************ 00:04:58.246 START TEST rpc_trace_cmd_test 00:04:58.246 ************************************ 00:04:58.246 02:54:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:58.246 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:58.246 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:58.246 02:54:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.246 02:54:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:58.246 02:54:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.246 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:58.246 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59459", 00:04:58.246 "tpoint_group_mask": "0x8", 00:04:58.246 "iscsi_conn": { 00:04:58.246 "mask": "0x2", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 }, 00:04:58.246 "scsi": { 00:04:58.246 "mask": "0x4", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 }, 00:04:58.246 "bdev": { 00:04:58.246 "mask": "0x8", 00:04:58.246 "tpoint_mask": "0xffffffffffffffff" 00:04:58.246 }, 00:04:58.246 "nvmf_rdma": { 00:04:58.246 "mask": "0x10", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 }, 00:04:58.246 "nvmf_tcp": { 00:04:58.246 "mask": "0x20", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 }, 00:04:58.246 "ftl": { 00:04:58.246 "mask": "0x40", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 }, 00:04:58.246 "blobfs": { 00:04:58.246 "mask": "0x80", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 }, 00:04:58.246 "dsa": { 00:04:58.246 "mask": "0x200", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 }, 00:04:58.246 "thread": { 00:04:58.246 "mask": "0x400", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 }, 00:04:58.246 "nvme_pcie": { 00:04:58.246 "mask": "0x800", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 }, 00:04:58.246 "iaa": { 00:04:58.246 "mask": "0x1000", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 }, 00:04:58.246 "nvme_tcp": { 00:04:58.246 "mask": "0x2000", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 }, 00:04:58.246 "bdev_nvme": { 00:04:58.246 "mask": "0x4000", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 }, 00:04:58.246 "sock": { 00:04:58.246 "mask": "0x8000", 00:04:58.246 "tpoint_mask": "0x0" 00:04:58.246 } 00:04:58.246 }' 00:04:58.246 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:58.246 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:58.246 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:58.246 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:58.246 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:58.506 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:58.506 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:58.506 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:58.506 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:58.506 ************************************ 00:04:58.506 END TEST rpc_trace_cmd_test 00:04:58.506 ************************************ 00:04:58.506 02:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:58.506 00:04:58.506 real 0m0.279s 00:04:58.506 user 0m0.243s 
00:04:58.506 sys 0m0.027s 00:04:58.506 02:54:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.506 02:54:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:58.506 02:54:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:58.506 02:54:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:58.506 02:54:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:58.506 02:54:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:58.506 02:54:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.506 02:54:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.506 02:54:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.506 ************************************ 00:04:58.506 START TEST rpc_daemon_integrity 00:04:58.506 ************************************ 00:04:58.506 02:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:58.506 02:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.506 02:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.506 02:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.506 02:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.506 02:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.506 02:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.506 02:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.506 02:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.506 02:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.506 02:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.766 { 00:04:58.766 "name": "Malloc2", 00:04:58.766 "aliases": [ 00:04:58.766 "8ad7fc22-5a6d-48be-8e10-207e60577d11" 00:04:58.766 ], 00:04:58.766 "product_name": "Malloc disk", 00:04:58.766 "block_size": 512, 00:04:58.766 "num_blocks": 16384, 00:04:58.766 "uuid": "8ad7fc22-5a6d-48be-8e10-207e60577d11", 00:04:58.766 "assigned_rate_limits": { 00:04:58.766 "rw_ios_per_sec": 0, 00:04:58.766 "rw_mbytes_per_sec": 0, 00:04:58.766 "r_mbytes_per_sec": 0, 00:04:58.766 "w_mbytes_per_sec": 0 00:04:58.766 }, 00:04:58.766 "claimed": false, 00:04:58.766 "zoned": false, 00:04:58.766 "supported_io_types": { 00:04:58.766 "read": true, 00:04:58.766 "write": true, 00:04:58.766 "unmap": true, 00:04:58.766 "flush": true, 00:04:58.766 "reset": true, 00:04:58.766 "nvme_admin": false, 00:04:58.766 "nvme_io": false, 00:04:58.766 "nvme_io_md": false, 00:04:58.766 "write_zeroes": true, 00:04:58.766 "zcopy": true, 00:04:58.766 "get_zone_info": false, 00:04:58.766 "zone_management": false, 00:04:58.766 "zone_append": false, 
00:04:58.766 "compare": false, 00:04:58.766 "compare_and_write": false, 00:04:58.766 "abort": true, 00:04:58.766 "seek_hole": false, 00:04:58.766 "seek_data": false, 00:04:58.766 "copy": true, 00:04:58.766 "nvme_iov_md": false 00:04:58.766 }, 00:04:58.766 "memory_domains": [ 00:04:58.766 { 00:04:58.766 "dma_device_id": "system", 00:04:58.766 "dma_device_type": 1 00:04:58.766 }, 00:04:58.766 { 00:04:58.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.766 "dma_device_type": 2 00:04:58.766 } 00:04:58.766 ], 00:04:58.766 "driver_specific": {} 00:04:58.766 } 00:04:58.766 ]' 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.766 [2024-07-13 02:54:05.085238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:58.766 [2024-07-13 02:54:05.085341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.766 [2024-07-13 02:54:05.085367] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:04:58.766 [2024-07-13 02:54:05.085380] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.766 [2024-07-13 02:54:05.087872] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.766 [2024-07-13 02:54:05.087973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.766 Passthru0 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.766 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.766 { 00:04:58.766 "name": "Malloc2", 00:04:58.766 "aliases": [ 00:04:58.766 "8ad7fc22-5a6d-48be-8e10-207e60577d11" 00:04:58.766 ], 00:04:58.766 "product_name": "Malloc disk", 00:04:58.766 "block_size": 512, 00:04:58.766 "num_blocks": 16384, 00:04:58.766 "uuid": "8ad7fc22-5a6d-48be-8e10-207e60577d11", 00:04:58.766 "assigned_rate_limits": { 00:04:58.766 "rw_ios_per_sec": 0, 00:04:58.766 "rw_mbytes_per_sec": 0, 00:04:58.766 "r_mbytes_per_sec": 0, 00:04:58.766 "w_mbytes_per_sec": 0 00:04:58.766 }, 00:04:58.766 "claimed": true, 00:04:58.766 "claim_type": "exclusive_write", 00:04:58.766 "zoned": false, 00:04:58.766 "supported_io_types": { 00:04:58.766 "read": true, 00:04:58.766 "write": true, 00:04:58.766 "unmap": true, 00:04:58.766 "flush": true, 00:04:58.766 "reset": true, 00:04:58.766 "nvme_admin": false, 00:04:58.766 "nvme_io": false, 00:04:58.766 "nvme_io_md": false, 00:04:58.767 "write_zeroes": true, 00:04:58.767 "zcopy": true, 00:04:58.767 "get_zone_info": false, 00:04:58.767 "zone_management": false, 00:04:58.767 "zone_append": false, 00:04:58.767 "compare": false, 00:04:58.767 "compare_and_write": false, 00:04:58.767 "abort": true, 00:04:58.767 
"seek_hole": false, 00:04:58.767 "seek_data": false, 00:04:58.767 "copy": true, 00:04:58.767 "nvme_iov_md": false 00:04:58.767 }, 00:04:58.767 "memory_domains": [ 00:04:58.767 { 00:04:58.767 "dma_device_id": "system", 00:04:58.767 "dma_device_type": 1 00:04:58.767 }, 00:04:58.767 { 00:04:58.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.767 "dma_device_type": 2 00:04:58.767 } 00:04:58.767 ], 00:04:58.767 "driver_specific": {} 00:04:58.767 }, 00:04:58.767 { 00:04:58.767 "name": "Passthru0", 00:04:58.767 "aliases": [ 00:04:58.767 "c698a2a4-fc7d-56e7-aafc-a98c820e0a66" 00:04:58.767 ], 00:04:58.767 "product_name": "passthru", 00:04:58.767 "block_size": 512, 00:04:58.767 "num_blocks": 16384, 00:04:58.767 "uuid": "c698a2a4-fc7d-56e7-aafc-a98c820e0a66", 00:04:58.767 "assigned_rate_limits": { 00:04:58.767 "rw_ios_per_sec": 0, 00:04:58.767 "rw_mbytes_per_sec": 0, 00:04:58.767 "r_mbytes_per_sec": 0, 00:04:58.767 "w_mbytes_per_sec": 0 00:04:58.767 }, 00:04:58.767 "claimed": false, 00:04:58.767 "zoned": false, 00:04:58.767 "supported_io_types": { 00:04:58.767 "read": true, 00:04:58.767 "write": true, 00:04:58.767 "unmap": true, 00:04:58.767 "flush": true, 00:04:58.767 "reset": true, 00:04:58.767 "nvme_admin": false, 00:04:58.767 "nvme_io": false, 00:04:58.767 "nvme_io_md": false, 00:04:58.767 "write_zeroes": true, 00:04:58.767 "zcopy": true, 00:04:58.767 "get_zone_info": false, 00:04:58.767 "zone_management": false, 00:04:58.767 "zone_append": false, 00:04:58.767 "compare": false, 00:04:58.767 "compare_and_write": false, 00:04:58.767 "abort": true, 00:04:58.767 "seek_hole": false, 00:04:58.767 "seek_data": false, 00:04:58.767 "copy": true, 00:04:58.767 "nvme_iov_md": false 00:04:58.767 }, 00:04:58.767 "memory_domains": [ 00:04:58.767 { 00:04:58.767 "dma_device_id": "system", 00:04:58.767 "dma_device_type": 1 00:04:58.767 }, 00:04:58.767 { 00:04:58.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.767 "dma_device_type": 2 00:04:58.767 } 00:04:58.767 ], 00:04:58.767 "driver_specific": { 00:04:58.767 "passthru": { 00:04:58.767 "name": "Passthru0", 00:04:58.767 "base_bdev_name": "Malloc2" 00:04:58.767 } 00:04:58.767 } 00:04:58.767 } 00:04:58.767 ]' 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.767 02:54:05 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.767 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.026 ************************************ 00:04:59.026 END TEST rpc_daemon_integrity 00:04:59.026 ************************************ 00:04:59.026 02:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.026 00:04:59.026 real 0m0.347s 00:04:59.026 user 0m0.228s 00:04:59.026 sys 0m0.036s 00:04:59.026 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.026 02:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.026 02:54:05 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.026 02:54:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:59.026 02:54:05 rpc -- rpc/rpc.sh@84 -- # killprocess 59459 00:04:59.026 02:54:05 rpc -- common/autotest_common.sh@948 -- # '[' -z 59459 ']' 00:04:59.026 02:54:05 rpc -- common/autotest_common.sh@952 -- # kill -0 59459 00:04:59.026 02:54:05 rpc -- common/autotest_common.sh@953 -- # uname 00:04:59.026 02:54:05 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.026 02:54:05 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59459 00:04:59.026 killing process with pid 59459 00:04:59.026 02:54:05 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.026 02:54:05 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.026 02:54:05 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59459' 00:04:59.026 02:54:05 rpc -- common/autotest_common.sh@967 -- # kill 59459 00:04:59.026 02:54:05 rpc -- common/autotest_common.sh@972 -- # wait 59459 00:05:00.931 ************************************ 00:05:00.931 END TEST rpc 00:05:00.931 ************************************ 00:05:00.931 00:05:00.931 real 0m4.298s 00:05:00.931 user 0m5.136s 00:05:00.931 sys 0m0.701s 00:05:00.931 02:54:07 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.931 02:54:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.931 02:54:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:00.931 02:54:07 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:00.931 02:54:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.931 02:54:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.931 02:54:07 -- common/autotest_common.sh@10 -- # set +x 00:05:00.931 ************************************ 00:05:00.931 START TEST skip_rpc 00:05:00.931 ************************************ 00:05:00.931 02:54:07 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:00.931 * Looking for test storage... 
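(The rpc suite that finished above drives the bdev RPCs through the rpc_cmd test helper; a minimal manual sketch against an already-running spdk_tgt on the default /var/tmp/spdk.sock socket, using only the method names recorded in the log, would be:)

cd /home/vagrant/spdk_repo/spdk
scripts/rpc.py bdev_malloc_create 8 512                      # returns Malloc0
scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # claims Malloc0
scripts/rpc.py bdev_get_bdevs                                # JSON like the dumps above
scripts/rpc.py bdev_passthru_delete Passthru0
scripts/rpc.py bdev_malloc_delete Malloc0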
00:05:00.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.931 02:54:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.931 02:54:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.931 02:54:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:00.931 02:54:07 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.931 02:54:07 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.931 02:54:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.931 ************************************ 00:05:00.931 START TEST skip_rpc 00:05:00.931 ************************************ 00:05:00.931 02:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:00.931 02:54:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59669 00:05:00.931 02:54:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.931 02:54:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:00.931 02:54:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:00.931 [2024-07-13 02:54:07.359369] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:00.931 [2024-07-13 02:54:07.359544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59669 ] 00:05:01.190 [2024-07-13 02:54:07.528314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.449 [2024-07-13 02:54:07.699949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.449 [2024-07-13 02:54:07.875553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59669 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 59669 ']' 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 59669 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59669 00:05:06.714 killing process with pid 59669 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59669' 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 59669 00:05:06.714 02:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 59669 00:05:08.089 00:05:08.089 real 0m6.909s 00:05:08.089 user 0m6.494s 00:05:08.089 sys 0m0.313s 00:05:08.089 02:54:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.089 02:54:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.089 ************************************ 00:05:08.089 END TEST skip_rpc 00:05:08.089 ************************************ 00:05:08.089 02:54:14 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:08.089 02:54:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:08.089 02:54:14 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.089 02:54:14 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.089 02:54:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.089 ************************************ 00:05:08.089 START TEST skip_rpc_with_json 00:05:08.089 ************************************ 00:05:08.089 02:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:08.089 02:54:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:08.089 02:54:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59773 00:05:08.089 02:54:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.089 02:54:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.089 02:54:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59773 00:05:08.089 02:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59773 ']' 00:05:08.089 02:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.089 02:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.089 02:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
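(The skip_rpc_with_json test starting here captures the running target's configuration and later restarts spdk_tgt from that file; a manual sketch of the same round trip, assuming the paths used by this job, would be:)

cd /home/vagrant/spdk_repo/spdk
scripts/rpc.py save_config > test/rpc/config.json            # dump the live config as JSON
# stop the first target, then reload everything from the saved file:
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json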
00:05:08.089 02:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.089 02:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.089 [2024-07-13 02:54:14.318723] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:08.089 [2024-07-13 02:54:14.319255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59773 ] 00:05:08.089 [2024-07-13 02:54:14.491521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.348 [2024-07-13 02:54:14.635629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.348 [2024-07-13 02:54:14.782827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.915 [2024-07-13 02:54:15.251807] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:08.915 request: 00:05:08.915 { 00:05:08.915 "trtype": "tcp", 00:05:08.915 "method": "nvmf_get_transports", 00:05:08.915 "req_id": 1 00:05:08.915 } 00:05:08.915 Got JSON-RPC error response 00:05:08.915 response: 00:05:08.915 { 00:05:08.915 "code": -19, 00:05:08.915 "message": "No such device" 00:05:08.915 } 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.915 [2024-07-13 02:54:15.263951] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.915 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.173 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.173 02:54:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:09.173 { 00:05:09.173 "subsystems": [ 00:05:09.173 { 00:05:09.173 "subsystem": "vfio_user_target", 00:05:09.173 "config": null 00:05:09.173 }, 00:05:09.173 { 00:05:09.173 "subsystem": "keyring", 00:05:09.173 "config": [] 00:05:09.173 }, 00:05:09.173 { 00:05:09.173 "subsystem": "iobuf", 00:05:09.173 "config": [ 00:05:09.173 { 00:05:09.173 "method": "iobuf_set_options", 00:05:09.173 "params": { 00:05:09.173 "small_pool_count": 8192, 00:05:09.173 "large_pool_count": 1024, 
00:05:09.173 "small_bufsize": 8192, 00:05:09.173 "large_bufsize": 135168 00:05:09.173 } 00:05:09.173 } 00:05:09.173 ] 00:05:09.173 }, 00:05:09.173 { 00:05:09.173 "subsystem": "sock", 00:05:09.173 "config": [ 00:05:09.173 { 00:05:09.173 "method": "sock_set_default_impl", 00:05:09.173 "params": { 00:05:09.173 "impl_name": "uring" 00:05:09.173 } 00:05:09.173 }, 00:05:09.173 { 00:05:09.173 "method": "sock_impl_set_options", 00:05:09.173 "params": { 00:05:09.173 "impl_name": "ssl", 00:05:09.173 "recv_buf_size": 4096, 00:05:09.173 "send_buf_size": 4096, 00:05:09.173 "enable_recv_pipe": true, 00:05:09.173 "enable_quickack": false, 00:05:09.173 "enable_placement_id": 0, 00:05:09.173 "enable_zerocopy_send_server": true, 00:05:09.173 "enable_zerocopy_send_client": false, 00:05:09.173 "zerocopy_threshold": 0, 00:05:09.173 "tls_version": 0, 00:05:09.173 "enable_ktls": false 00:05:09.173 } 00:05:09.173 }, 00:05:09.173 { 00:05:09.173 "method": "sock_impl_set_options", 00:05:09.173 "params": { 00:05:09.173 "impl_name": "posix", 00:05:09.173 "recv_buf_size": 2097152, 00:05:09.173 "send_buf_size": 2097152, 00:05:09.173 "enable_recv_pipe": true, 00:05:09.173 "enable_quickack": false, 00:05:09.173 "enable_placement_id": 0, 00:05:09.173 "enable_zerocopy_send_server": true, 00:05:09.173 "enable_zerocopy_send_client": false, 00:05:09.173 "zerocopy_threshold": 0, 00:05:09.173 "tls_version": 0, 00:05:09.173 "enable_ktls": false 00:05:09.173 } 00:05:09.173 }, 00:05:09.174 { 00:05:09.174 "method": "sock_impl_set_options", 00:05:09.174 "params": { 00:05:09.174 "impl_name": "uring", 00:05:09.174 "recv_buf_size": 2097152, 00:05:09.174 "send_buf_size": 2097152, 00:05:09.174 "enable_recv_pipe": true, 00:05:09.174 "enable_quickack": false, 00:05:09.174 "enable_placement_id": 0, 00:05:09.174 "enable_zerocopy_send_server": false, 00:05:09.174 "enable_zerocopy_send_client": false, 00:05:09.174 "zerocopy_threshold": 0, 00:05:09.174 "tls_version": 0, 00:05:09.174 "enable_ktls": false 00:05:09.174 } 00:05:09.174 } 00:05:09.174 ] 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "subsystem": "vmd", 00:05:09.174 "config": [] 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "subsystem": "accel", 00:05:09.174 "config": [ 00:05:09.174 { 00:05:09.174 "method": "accel_set_options", 00:05:09.174 "params": { 00:05:09.174 "small_cache_size": 128, 00:05:09.174 "large_cache_size": 16, 00:05:09.174 "task_count": 2048, 00:05:09.174 "sequence_count": 2048, 00:05:09.174 "buf_count": 2048 00:05:09.174 } 00:05:09.174 } 00:05:09.174 ] 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "subsystem": "bdev", 00:05:09.174 "config": [ 00:05:09.174 { 00:05:09.174 "method": "bdev_set_options", 00:05:09.174 "params": { 00:05:09.174 "bdev_io_pool_size": 65535, 00:05:09.174 "bdev_io_cache_size": 256, 00:05:09.174 "bdev_auto_examine": true, 00:05:09.174 "iobuf_small_cache_size": 128, 00:05:09.174 "iobuf_large_cache_size": 16 00:05:09.174 } 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "method": "bdev_raid_set_options", 00:05:09.174 "params": { 00:05:09.174 "process_window_size_kb": 1024 00:05:09.174 } 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "method": "bdev_iscsi_set_options", 00:05:09.174 "params": { 00:05:09.174 "timeout_sec": 30 00:05:09.174 } 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "method": "bdev_nvme_set_options", 00:05:09.174 "params": { 00:05:09.174 "action_on_timeout": "none", 00:05:09.174 "timeout_us": 0, 00:05:09.174 "timeout_admin_us": 0, 00:05:09.174 "keep_alive_timeout_ms": 10000, 00:05:09.174 "arbitration_burst": 0, 00:05:09.174 "low_priority_weight": 0, 
00:05:09.174 "medium_priority_weight": 0, 00:05:09.174 "high_priority_weight": 0, 00:05:09.174 "nvme_adminq_poll_period_us": 10000, 00:05:09.174 "nvme_ioq_poll_period_us": 0, 00:05:09.174 "io_queue_requests": 0, 00:05:09.174 "delay_cmd_submit": true, 00:05:09.174 "transport_retry_count": 4, 00:05:09.174 "bdev_retry_count": 3, 00:05:09.174 "transport_ack_timeout": 0, 00:05:09.174 "ctrlr_loss_timeout_sec": 0, 00:05:09.174 "reconnect_delay_sec": 0, 00:05:09.174 "fast_io_fail_timeout_sec": 0, 00:05:09.174 "disable_auto_failback": false, 00:05:09.174 "generate_uuids": false, 00:05:09.174 "transport_tos": 0, 00:05:09.174 "nvme_error_stat": false, 00:05:09.174 "rdma_srq_size": 0, 00:05:09.174 "io_path_stat": false, 00:05:09.174 "allow_accel_sequence": false, 00:05:09.174 "rdma_max_cq_size": 0, 00:05:09.174 "rdma_cm_event_timeout_ms": 0, 00:05:09.174 "dhchap_digests": [ 00:05:09.174 "sha256", 00:05:09.174 "sha384", 00:05:09.174 "sha512" 00:05:09.174 ], 00:05:09.174 "dhchap_dhgroups": [ 00:05:09.174 "null", 00:05:09.174 "ffdhe2048", 00:05:09.174 "ffdhe3072", 00:05:09.174 "ffdhe4096", 00:05:09.174 "ffdhe6144", 00:05:09.174 "ffdhe8192" 00:05:09.174 ] 00:05:09.174 } 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "method": "bdev_nvme_set_hotplug", 00:05:09.174 "params": { 00:05:09.174 "period_us": 100000, 00:05:09.174 "enable": false 00:05:09.174 } 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "method": "bdev_wait_for_examine" 00:05:09.174 } 00:05:09.174 ] 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "subsystem": "scsi", 00:05:09.174 "config": null 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "subsystem": "scheduler", 00:05:09.174 "config": [ 00:05:09.174 { 00:05:09.174 "method": "framework_set_scheduler", 00:05:09.174 "params": { 00:05:09.174 "name": "static" 00:05:09.174 } 00:05:09.174 } 00:05:09.174 ] 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "subsystem": "vhost_scsi", 00:05:09.174 "config": [] 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "subsystem": "vhost_blk", 00:05:09.174 "config": [] 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "subsystem": "ublk", 00:05:09.174 "config": [] 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "subsystem": "nbd", 00:05:09.174 "config": [] 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "subsystem": "nvmf", 00:05:09.174 "config": [ 00:05:09.174 { 00:05:09.174 "method": "nvmf_set_config", 00:05:09.174 "params": { 00:05:09.174 "discovery_filter": "match_any", 00:05:09.174 "admin_cmd_passthru": { 00:05:09.174 "identify_ctrlr": false 00:05:09.174 } 00:05:09.174 } 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "method": "nvmf_set_max_subsystems", 00:05:09.174 "params": { 00:05:09.174 "max_subsystems": 1024 00:05:09.174 } 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "method": "nvmf_set_crdt", 00:05:09.174 "params": { 00:05:09.174 "crdt1": 0, 00:05:09.174 "crdt2": 0, 00:05:09.174 "crdt3": 0 00:05:09.174 } 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "method": "nvmf_create_transport", 00:05:09.174 "params": { 00:05:09.174 "trtype": "TCP", 00:05:09.174 "max_queue_depth": 128, 00:05:09.174 "max_io_qpairs_per_ctrlr": 127, 00:05:09.174 "in_capsule_data_size": 4096, 00:05:09.174 "max_io_size": 131072, 00:05:09.174 "io_unit_size": 131072, 00:05:09.174 "max_aq_depth": 128, 00:05:09.174 "num_shared_buffers": 511, 00:05:09.174 "buf_cache_size": 4294967295, 00:05:09.174 "dif_insert_or_strip": false, 00:05:09.174 "zcopy": false, 00:05:09.174 "c2h_success": true, 00:05:09.174 "sock_priority": 0, 00:05:09.174 "abort_timeout_sec": 1, 00:05:09.174 "ack_timeout": 0, 00:05:09.174 "data_wr_pool_size": 0 
00:05:09.174 } 00:05:09.174 } 00:05:09.174 ] 00:05:09.174 }, 00:05:09.174 { 00:05:09.174 "subsystem": "iscsi", 00:05:09.174 "config": [ 00:05:09.174 { 00:05:09.174 "method": "iscsi_set_options", 00:05:09.174 "params": { 00:05:09.174 "node_base": "iqn.2016-06.io.spdk", 00:05:09.174 "max_sessions": 128, 00:05:09.174 "max_connections_per_session": 2, 00:05:09.174 "max_queue_depth": 64, 00:05:09.174 "default_time2wait": 2, 00:05:09.174 "default_time2retain": 20, 00:05:09.174 "first_burst_length": 8192, 00:05:09.174 "immediate_data": true, 00:05:09.174 "allow_duplicated_isid": false, 00:05:09.174 "error_recovery_level": 0, 00:05:09.174 "nop_timeout": 60, 00:05:09.174 "nop_in_interval": 30, 00:05:09.174 "disable_chap": false, 00:05:09.174 "require_chap": false, 00:05:09.174 "mutual_chap": false, 00:05:09.174 "chap_group": 0, 00:05:09.174 "max_large_datain_per_connection": 64, 00:05:09.174 "max_r2t_per_connection": 4, 00:05:09.174 "pdu_pool_size": 36864, 00:05:09.174 "immediate_data_pool_size": 16384, 00:05:09.174 "data_out_pool_size": 2048 00:05:09.174 } 00:05:09.174 } 00:05:09.174 ] 00:05:09.174 } 00:05:09.174 ] 00:05:09.174 } 00:05:09.174 02:54:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:09.174 02:54:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59773 00:05:09.174 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59773 ']' 00:05:09.174 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59773 00:05:09.174 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:09.174 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.174 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59773 00:05:09.174 killing process with pid 59773 00:05:09.174 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.174 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.174 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59773' 00:05:09.174 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59773 00:05:09.174 02:54:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59773 00:05:11.104 02:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59818 00:05:11.104 02:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:11.104 02:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:16.415 02:54:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59818 00:05:16.415 02:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59818 ']' 00:05:16.415 02:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59818 00:05:16.415 02:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:16.415 02:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.415 02:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59818 00:05:16.415 02:54:22 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.415 02:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.415 02:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59818' 00:05:16.415 killing process with pid 59818 00:05:16.415 02:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59818 00:05:16.415 02:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59818 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:17.792 ************************************ 00:05:17.792 END TEST skip_rpc_with_json 00:05:17.792 ************************************ 00:05:17.792 00:05:17.792 real 0m9.879s 00:05:17.792 user 0m9.564s 00:05:17.792 sys 0m0.689s 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.792 02:54:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.792 02:54:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:17.792 02:54:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.792 02:54:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.792 02:54:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.792 ************************************ 00:05:17.792 START TEST skip_rpc_with_delay 00:05:17.792 ************************************ 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.792 [2024-07-13 02:54:24.222896] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:17.792 [2024-07-13 02:54:24.223029] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:17.792 00:05:17.792 real 0m0.150s 00:05:17.792 user 0m0.088s 00:05:17.792 sys 0m0.061s 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.792 ************************************ 00:05:17.792 END TEST skip_rpc_with_delay 00:05:17.792 ************************************ 00:05:17.792 02:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:18.051 02:54:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:18.051 02:54:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:18.051 02:54:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:18.051 02:54:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:18.051 02:54:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.051 02:54:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.051 02:54:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.051 ************************************ 00:05:18.051 START TEST exit_on_failed_rpc_init 00:05:18.051 ************************************ 00:05:18.051 02:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:18.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.051 02:54:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59946 00:05:18.051 02:54:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59946 00:05:18.051 02:54:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.051 02:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59946 ']' 00:05:18.051 02:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.051 02:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.051 02:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.051 02:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.051 02:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.051 [2024-07-13 02:54:24.456455] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:18.051 [2024-07-13 02:54:24.457348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59946 ] 00:05:18.310 [2024-07-13 02:54:24.623180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.310 [2024-07-13 02:54:24.765038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.569 [2024-07-13 02:54:24.905165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:19.136 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.136 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:19.136 02:54:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.136 02:54:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.136 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:19.136 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.136 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.136 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.136 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.136 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.136 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.137 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.137 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.137 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:19.137 02:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.137 [2024-07-13 02:54:25.465420] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:19.137 [2024-07-13 02:54:25.465625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59963 ] 00:05:19.395 [2024-07-13 02:54:25.635748] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.395 [2024-07-13 02:54:25.838134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.395 [2024-07-13 02:54:25.838249] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:19.395 [2024-07-13 02:54:25.838271] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:19.395 [2024-07-13 02:54:25.838297] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59946 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59946 ']' 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59946 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59946 00:05:19.963 killing process with pid 59946 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59946' 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59946 00:05:19.963 02:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59946 00:05:21.869 00:05:21.869 real 0m3.549s 00:05:21.869 user 0m4.154s 00:05:21.869 sys 0m0.466s 00:05:21.869 ************************************ 00:05:21.869 END TEST exit_on_failed_rpc_init 00:05:21.869 ************************************ 00:05:21.869 02:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.869 02:54:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:21.869 02:54:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:21.869 02:54:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.869 00:05:21.869 real 0m20.780s 00:05:21.869 user 0m20.386s 00:05:21.869 sys 0m1.719s 00:05:21.869 02:54:27 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.869 ************************************ 00:05:21.869 END TEST skip_rpc 00:05:21.869 ************************************ 00:05:21.869 02:54:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.869 02:54:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.869 02:54:27 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:21.869 02:54:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.869 
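The exit_on_failed_rpc_init failure above is the expected result: the second spdk_tgt (-m 0x2) tried to bind the same default RPC socket, /var/tmp/spdk.sock, that pid 59946 already owned, so the RPC listener refused to start and the app stopped with a non-zero code. A minimal sketch of the conflict and the usual way around it, assuming the binary path used throughout this run (-r selects an alternate RPC socket; nothing below is part of the test itself):

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $SPDK_TGT -m 0x1 &                          # first instance owns /var/tmp/spdk.sock
  $SPDK_TGT -m 0x2                            # fails: RPC socket already in use, as logged above
  $SPDK_TGT -m 0x2 -r /var/tmp/spdk2.sock &   # giving it its own RPC socket avoids the clash
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods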
02:54:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.869 02:54:27 -- common/autotest_common.sh@10 -- # set +x 00:05:21.869 ************************************ 00:05:21.869 START TEST rpc_client 00:05:21.869 ************************************ 00:05:21.869 02:54:27 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:21.869 * Looking for test storage... 00:05:21.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:21.869 02:54:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:21.869 OK 00:05:21.869 02:54:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:21.869 00:05:21.869 real 0m0.143s 00:05:21.869 user 0m0.066s 00:05:21.869 sys 0m0.082s 00:05:21.869 02:54:28 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.869 02:54:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:21.869 ************************************ 00:05:21.869 END TEST rpc_client 00:05:21.869 ************************************ 00:05:21.869 02:54:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.869 02:54:28 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:21.869 02:54:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.869 02:54:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.869 02:54:28 -- common/autotest_common.sh@10 -- # set +x 00:05:21.869 ************************************ 00:05:21.869 START TEST json_config 00:05:21.869 ************************************ 00:05:21.869 02:54:28 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.869 02:54:28 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:21.869 02:54:28 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.869 02:54:28 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.869 02:54:28 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.869 02:54:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.869 02:54:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.869 02:54:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.869 02:54:28 json_config -- paths/export.sh@5 -- # export PATH 00:05:21.869 02:54:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@47 -- # : 0 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:21.869 02:54:28 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:21.869 INFO: JSON configuration test init 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:21.869 02:54:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.869 02:54:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:21.869 02:54:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.869 02:54:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.869 Waiting for target to run... 00:05:21.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.869 02:54:28 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:21.869 02:54:28 json_config -- json_config/common.sh@9 -- # local app=target 00:05:21.869 02:54:28 json_config -- json_config/common.sh@10 -- # shift 00:05:21.869 02:54:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.869 02:54:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.869 02:54:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.869 02:54:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.869 02:54:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.869 02:54:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60102 00:05:21.869 02:54:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
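json_config_test_start_app launches the target with --wait-for-rpc, so spdk_tgt comes up idle on /var/tmp/spdk_tgt.sock and only acts once a configuration is pushed over RPC, which is what the traces that follow do with gen_nvme.sh and load_config. A rough by-hand sketch of that flow, assuming the repo paths used in this workspace (the /tmp output path is arbitrary):

  # start the target idle; it only opens the RPC socket and waits
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

  # generate a config for the local NVMe devices and replay its RPCs over the socket
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems | \
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config

  # capture the resulting runtime configuration
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/tgt_config.json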
00:05:21.869 02:54:28 json_config -- json_config/common.sh@25 -- # waitforlisten 60102 /var/tmp/spdk_tgt.sock 00:05:21.869 02:54:28 json_config -- common/autotest_common.sh@829 -- # '[' -z 60102 ']' 00:05:21.869 02:54:28 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.869 02:54:28 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:21.869 02:54:28 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.869 02:54:28 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.870 02:54:28 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.870 02:54:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.128 [2024-07-13 02:54:28.386122] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:22.128 [2024-07-13 02:54:28.386545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60102 ] 00:05:22.387 [2024-07-13 02:54:28.747984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.669 [2024-07-13 02:54:28.934844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.928 02:54:29 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.928 02:54:29 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:22.928 02:54:29 json_config -- json_config/common.sh@26 -- # echo '' 00:05:22.928 00:05:22.928 02:54:29 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:22.928 02:54:29 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:22.928 02:54:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.928 02:54:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.928 02:54:29 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:22.928 02:54:29 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:22.928 02:54:29 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:22.928 02:54:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.928 02:54:29 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:22.928 02:54:29 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:22.928 02:54:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:23.494 [2024-07-13 02:54:29.718291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:24.061 02:54:30 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:24.061 02:54:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:24.061 02:54:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.061 02:54:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.061 02:54:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:24.061 02:54:30 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:24.061 02:54:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:24.061 02:54:30 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:24.061 02:54:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:24.061 02:54:30 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:24.319 02:54:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.319 02:54:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:24.319 02:54:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.319 02:54:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:24.319 02:54:30 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:24.319 02:54:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:24.578 MallocForNvmf0 00:05:24.578 02:54:30 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:24.578 02:54:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:24.837 MallocForNvmf1 00:05:24.837 02:54:31 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:24.837 02:54:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:24.837 [2024-07-13 02:54:31.318921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.095 02:54:31 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:25.095 02:54:31 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:25.095 02:54:31 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:25.095 02:54:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:25.353 02:54:31 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:25.353 02:54:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:25.612 02:54:31 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:25.612 02:54:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:25.871 [2024-07-13 02:54:32.131533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:25.871 02:54:32 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:25.871 02:54:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.871 02:54:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.871 02:54:32 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:25.871 02:54:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.871 02:54:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.871 02:54:32 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:25.871 02:54:32 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:25.871 02:54:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:26.129 MallocBdevForConfigChangeCheck 00:05:26.129 02:54:32 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:26.129 02:54:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.129 02:54:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.129 02:54:32 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:26.130 02:54:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.696 INFO: shutting down applications... 00:05:26.696 02:54:32 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
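The create_nvmf_subsystem_config steps above assemble a complete NVMe-oF/TCP target purely through rpc.py against /var/tmp/spdk_tgt.sock. Condensed from the calls traced above, with the same arguments (the RPC variable is only a shorthand for this sketch, not something the test defines):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB malloc bdev, 512 B blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB malloc bdev, 1024 B blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, flags as used by the test
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

The final save_config dump is what lets the test relaunch an identical target from spdk_tgt_config.json a few steps later.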
00:05:26.696 02:54:32 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:26.696 02:54:32 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:26.696 02:54:32 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:26.696 02:54:32 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:26.954 Calling clear_iscsi_subsystem 00:05:26.954 Calling clear_nvmf_subsystem 00:05:26.954 Calling clear_nbd_subsystem 00:05:26.954 Calling clear_ublk_subsystem 00:05:26.954 Calling clear_vhost_blk_subsystem 00:05:26.954 Calling clear_vhost_scsi_subsystem 00:05:26.954 Calling clear_bdev_subsystem 00:05:26.954 02:54:33 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:26.954 02:54:33 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:26.954 02:54:33 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:26.954 02:54:33 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.954 02:54:33 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:26.954 02:54:33 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:27.212 02:54:33 json_config -- json_config/json_config.sh@345 -- # break 00:05:27.212 02:54:33 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:27.212 02:54:33 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:27.213 02:54:33 json_config -- json_config/common.sh@31 -- # local app=target 00:05:27.213 02:54:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:27.213 02:54:33 json_config -- json_config/common.sh@35 -- # [[ -n 60102 ]] 00:05:27.213 02:54:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 60102 00:05:27.213 02:54:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:27.213 02:54:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.213 02:54:33 json_config -- json_config/common.sh@41 -- # kill -0 60102 00:05:27.213 02:54:33 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.781 02:54:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.781 02:54:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.781 02:54:34 json_config -- json_config/common.sh@41 -- # kill -0 60102 00:05:27.781 02:54:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.374 02:54:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.374 02:54:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.374 02:54:34 json_config -- json_config/common.sh@41 -- # kill -0 60102 00:05:28.374 02:54:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:28.374 02:54:34 json_config -- json_config/common.sh@43 -- # break 00:05:28.374 02:54:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:28.374 SPDK target shutdown done 00:05:28.374 02:54:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:28.374 INFO: relaunching applications... 
00:05:28.374 02:54:34 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:28.374 02:54:34 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:28.374 02:54:34 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.374 02:54:34 json_config -- json_config/common.sh@10 -- # shift 00:05:28.374 02:54:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.374 02:54:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.374 02:54:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.374 02:54:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.374 02:54:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.374 Waiting for target to run... 00:05:28.374 02:54:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60300 00:05:28.374 02:54:34 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:28.374 02:54:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.374 02:54:34 json_config -- json_config/common.sh@25 -- # waitforlisten 60300 /var/tmp/spdk_tgt.sock 00:05:28.374 02:54:34 json_config -- common/autotest_common.sh@829 -- # '[' -z 60300 ']' 00:05:28.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.374 02:54:34 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.374 02:54:34 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.374 02:54:34 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.374 02:54:34 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.374 02:54:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.374 [2024-07-13 02:54:34.707280] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:28.374 [2024-07-13 02:54:34.707470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60300 ] 00:05:28.633 [2024-07-13 02:54:35.014054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.891 [2024-07-13 02:54:35.148957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.150 [2024-07-13 02:54:35.400730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:29.717 [2024-07-13 02:54:35.937696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.717 [2024-07-13 02:54:35.969770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:29.717 00:05:29.717 INFO: Checking if target configuration is the same... 
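With the target relaunched from spdk_tgt_config.json, the next step checks that the restored configuration matches the file it was booted from. json_diff.sh does this by normalizing both sides with config_filter.py -method sort and diffing them; a roughly equivalent by-hand version, with arbitrary temp-file names, looks like:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py

  # normalize the live config and the saved file, then compare
  $RPC save_config | $FILTER -method sort > /tmp/live.json
  $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'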
00:05:29.717 02:54:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.717 02:54:36 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:29.717 02:54:36 json_config -- json_config/common.sh@26 -- # echo '' 00:05:29.717 02:54:36 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:29.717 02:54:36 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:29.717 02:54:36 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:29.717 02:54:36 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:29.717 02:54:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.717 + '[' 2 -ne 2 ']' 00:05:29.717 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:29.717 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:29.717 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:29.717 +++ basename /dev/fd/62 00:05:29.717 ++ mktemp /tmp/62.XXX 00:05:29.717 + tmp_file_1=/tmp/62.MsE 00:05:29.717 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:29.717 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:29.717 + tmp_file_2=/tmp/spdk_tgt_config.json.R0k 00:05:29.717 + ret=0 00:05:29.717 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:29.976 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:29.976 + diff -u /tmp/62.MsE /tmp/spdk_tgt_config.json.R0k 00:05:29.976 INFO: JSON config files are the same 00:05:29.976 + echo 'INFO: JSON config files are the same' 00:05:29.976 + rm /tmp/62.MsE /tmp/spdk_tgt_config.json.R0k 00:05:29.976 + exit 0 00:05:29.976 INFO: changing configuration and checking if this can be detected... 00:05:29.976 02:54:36 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:29.976 02:54:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:29.976 02:54:36 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:29.976 02:54:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:30.234 02:54:36 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:30.234 02:54:36 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:30.234 02:54:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.234 + '[' 2 -ne 2 ']' 00:05:30.493 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:30.493 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:30.493 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:30.493 +++ basename /dev/fd/62 00:05:30.493 ++ mktemp /tmp/62.XXX 00:05:30.493 + tmp_file_1=/tmp/62.fOJ 00:05:30.493 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:30.493 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:30.493 + tmp_file_2=/tmp/spdk_tgt_config.json.IXs 00:05:30.493 + ret=0 00:05:30.493 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:30.751 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:30.751 + diff -u /tmp/62.fOJ /tmp/spdk_tgt_config.json.IXs 00:05:30.751 + ret=1 00:05:30.751 + echo '=== Start of file: /tmp/62.fOJ ===' 00:05:30.751 + cat /tmp/62.fOJ 00:05:30.751 + echo '=== End of file: /tmp/62.fOJ ===' 00:05:30.751 + echo '' 00:05:30.751 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IXs ===' 00:05:30.751 + cat /tmp/spdk_tgt_config.json.IXs 00:05:30.751 + echo '=== End of file: /tmp/spdk_tgt_config.json.IXs ===' 00:05:30.751 + echo '' 00:05:30.751 + rm /tmp/62.fOJ /tmp/spdk_tgt_config.json.IXs 00:05:30.751 + exit 1 00:05:30.751 INFO: configuration change detected. 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:30.751 02:54:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.751 02:54:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@317 -- # [[ -n 60300 ]] 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:30.751 02:54:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.751 02:54:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:30.751 02:54:37 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:30.751 02:54:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.751 02:54:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.010 02:54:37 json_config -- json_config/json_config.sh@323 -- # killprocess 60300 00:05:31.010 02:54:37 json_config -- common/autotest_common.sh@948 -- # '[' -z 60300 ']' 00:05:31.010 02:54:37 json_config -- common/autotest_common.sh@952 -- # kill -0 60300 00:05:31.010 02:54:37 json_config -- common/autotest_common.sh@953 -- # uname 00:05:31.010 02:54:37 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.010 02:54:37 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60300 00:05:31.010 
killing process with pid 60300 00:05:31.010 02:54:37 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.010 02:54:37 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.010 02:54:37 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60300' 00:05:31.010 02:54:37 json_config -- common/autotest_common.sh@967 -- # kill 60300 00:05:31.010 02:54:37 json_config -- common/autotest_common.sh@972 -- # wait 60300 00:05:31.946 02:54:38 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:31.946 02:54:38 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:31.946 02:54:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:31.946 02:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.946 INFO: Success 00:05:31.946 02:54:38 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:31.946 02:54:38 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:31.946 ************************************ 00:05:31.946 END TEST json_config 00:05:31.946 ************************************ 00:05:31.946 00:05:31.946 real 0m9.957s 00:05:31.946 user 0m13.285s 00:05:31.946 sys 0m1.656s 00:05:31.946 02:54:38 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.946 02:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.946 02:54:38 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.946 02:54:38 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:31.946 02:54:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.946 02:54:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.946 02:54:38 -- common/autotest_common.sh@10 -- # set +x 00:05:31.946 ************************************ 00:05:31.946 START TEST json_config_extra_key 00:05:31.946 ************************************ 00:05:31.946 02:54:38 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:31.946 02:54:38 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.946 02:54:38 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.946 02:54:38 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.946 02:54:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.946 02:54:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.946 02:54:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.946 02:54:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:31.946 02:54:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.946 02:54:38 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.946 02:54:38 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:31.946 INFO: launching applications... 00:05:31.946 Waiting for target to run... 00:05:31.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.946 02:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:31.946 02:54:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:31.946 02:54:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:31.946 02:54:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.946 02:54:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.946 02:54:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.947 02:54:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.947 02:54:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.947 02:54:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60452 00:05:31.947 02:54:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
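Each test in this log launches spdk_tgt and then calls the waitforlisten helper from autotest_common.sh, which simply blocks until the new process answers on its JSON-RPC socket before any test RPC is sent. The helper's implementation is not reproduced here; the loop below is only a minimal hypothetical equivalent of that idea, written against the socket path used by this test (the retry count and sleep interval are illustrative, run from the spdk repo root):

  # hypothetical stand-in for waitforlisten: poll until the target answers RPCs
  rpc_sock=/var/tmp/spdk_tgt.sock
  for _ in $(seq 1 100); do
      scripts/rpc.py -s "$rpc_sock" -t 1 rpc_get_methods > /dev/null 2>&1 && break
      sleep 0.1
  done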
00:05:31.947 02:54:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60452 /var/tmp/spdk_tgt.sock 00:05:31.947 02:54:38 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 60452 ']' 00:05:31.947 02:54:38 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.947 02:54:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:31.947 02:54:38 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.947 02:54:38 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.947 02:54:38 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.947 02:54:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:31.947 [2024-07-13 02:54:38.383704] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:31.947 [2024-07-13 02:54:38.384659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60452 ] 00:05:32.515 [2024-07-13 02:54:38.718038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.515 [2024-07-13 02:54:38.869878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.872 [2024-07-13 02:54:39.025778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:33.130 02:54:39 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.130 02:54:39 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:33.130 02:54:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:33.130 00:05:33.130 02:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:33.130 INFO: shutting down applications... 
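The json_config_extra_key run above starts spdk_tgt with --json, so the target applies a saved configuration before its RPC server accepts connections. The contents of test/json_config/extra_key.json are not reproduced in this log; the sketch below only illustrates the general shape such a startup config takes (a subsystems list whose entries pair an RPC method with its params), using a hypothetical file and a Malloc bdev as the example, with the spdk_tgt flags taken from the command above (run from the spdk repo root):

  # assuming ./example_config.json (hypothetical, not the test's extra_key.json) contains:
  #   {
  #     "subsystems": [
  #       { "subsystem": "bdev",
  #         "config": [
  #           { "method": "bdev_malloc_create",
  #             "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 } }
  #         ] }
  #     ]
  #   }
  # the target can be started with it the same way the test does:
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json ./example_config.json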
00:05:33.130 02:54:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:33.130 02:54:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:33.130 02:54:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:33.130 02:54:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60452 ]] 00:05:33.130 02:54:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60452 00:05:33.130 02:54:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:33.130 02:54:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.130 02:54:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60452 00:05:33.130 02:54:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.697 02:54:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.697 02:54:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.697 02:54:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60452 00:05:33.697 02:54:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.956 02:54:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.956 02:54:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.956 02:54:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60452 00:05:33.956 02:54:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.523 02:54:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.523 02:54:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.523 02:54:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60452 00:05:34.523 02:54:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.090 02:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.090 02:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.090 02:54:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60452 00:05:35.090 02:54:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.657 02:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.657 02:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.657 02:54:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60452 00:05:35.658 02:54:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:35.658 02:54:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:35.658 SPDK target shutdown done 00:05:35.658 Success 00:05:35.658 02:54:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:35.658 02:54:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:35.658 02:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:35.658 00:05:35.658 real 0m3.775s 00:05:35.658 user 0m3.266s 00:05:35.658 sys 0m0.428s 00:05:35.658 ************************************ 00:05:35.658 END TEST json_config_extra_key 00:05:35.658 ************************************ 00:05:35.658 02:54:41 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.658 02:54:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.658 02:54:41 -- 
common/autotest_common.sh@1142 -- # return 0 00:05:35.658 02:54:41 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.658 02:54:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.658 02:54:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.658 02:54:41 -- common/autotest_common.sh@10 -- # set +x 00:05:35.658 ************************************ 00:05:35.658 START TEST alias_rpc 00:05:35.658 ************************************ 00:05:35.658 02:54:42 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.658 * Looking for test storage... 00:05:35.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:35.658 02:54:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.658 02:54:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60549 00:05:35.658 02:54:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.658 02:54:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60549 00:05:35.658 02:54:42 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 60549 ']' 00:05:35.658 02:54:42 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.658 02:54:42 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.658 02:54:42 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.658 02:54:42 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.658 02:54:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.917 [2024-07-13 02:54:42.218237] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:35.917 [2024-07-13 02:54:42.219236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60549 ] 00:05:35.917 [2024-07-13 02:54:42.396720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.175 [2024-07-13 02:54:42.557017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.444 [2024-07-13 02:54:42.707627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:36.706 02:54:43 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.706 02:54:43 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:36.706 02:54:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:36.965 02:54:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60549 00:05:36.965 02:54:43 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 60549 ']' 00:05:36.965 02:54:43 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 60549 00:05:36.965 02:54:43 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:36.965 02:54:43 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.965 02:54:43 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60549 00:05:37.224 killing process with pid 60549 00:05:37.224 02:54:43 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.224 02:54:43 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.224 02:54:43 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60549' 00:05:37.224 02:54:43 alias_rpc -- common/autotest_common.sh@967 -- # kill 60549 00:05:37.224 02:54:43 alias_rpc -- common/autotest_common.sh@972 -- # wait 60549 00:05:39.136 ************************************ 00:05:39.136 END TEST alias_rpc 00:05:39.136 ************************************ 00:05:39.136 00:05:39.136 real 0m3.184s 00:05:39.136 user 0m3.364s 00:05:39.136 sys 0m0.416s 00:05:39.136 02:54:45 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.136 02:54:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.136 02:54:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.136 02:54:45 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:39.136 02:54:45 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.136 02:54:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.136 02:54:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.136 02:54:45 -- common/autotest_common.sh@10 -- # set +x 00:05:39.136 ************************************ 00:05:39.136 START TEST spdkcli_tcp 00:05:39.136 ************************************ 00:05:39.136 02:54:45 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.136 * Looking for test storage... 
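The spdkcli_tcp test starting here exercises the target's JSON-RPC interface over TCP rather than over the UNIX-domain socket: as the commands further down in this log show, it launches spdk_tgt, bridges /var/tmp/spdk.sock to TCP port 9998 with socat, and then points rpc.py at 127.0.0.1:9998. A condensed sketch of that pattern, with the socket path, port and rpc.py flags taken from those commands (run from the spdk repo root; backgrounding and cleanup of socat are added here for the sketch):

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &            # expose the RPC socket on TCP 9998
  socat_pid=$!
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # issue an RPC over TCP instead of the UNIX socket
  kill "$socat_pid"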
00:05:39.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:39.136 02:54:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:39.137 02:54:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:39.137 02:54:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:39.137 02:54:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:39.137 02:54:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:39.137 02:54:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:39.137 02:54:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:39.137 02:54:45 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.137 02:54:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.137 02:54:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60637 00:05:39.137 02:54:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60637 00:05:39.137 02:54:45 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 60637 ']' 00:05:39.137 02:54:45 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.137 02:54:45 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.137 02:54:45 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.137 02:54:45 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.137 02:54:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.137 02:54:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:39.137 [2024-07-13 02:54:45.448043] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:39.137 [2024-07-13 02:54:45.449214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60637 ] 00:05:39.137 [2024-07-13 02:54:45.620434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.396 [2024-07-13 02:54:45.780233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.396 [2024-07-13 02:54:45.780246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.654 [2024-07-13 02:54:45.932906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:39.912 02:54:46 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.912 02:54:46 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:39.912 02:54:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60654 00:05:39.912 02:54:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:39.912 02:54:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:40.171 [ 00:05:40.171 "bdev_malloc_delete", 00:05:40.171 "bdev_malloc_create", 00:05:40.171 "bdev_null_resize", 00:05:40.171 "bdev_null_delete", 00:05:40.171 "bdev_null_create", 00:05:40.171 "bdev_nvme_cuse_unregister", 00:05:40.171 "bdev_nvme_cuse_register", 00:05:40.171 "bdev_opal_new_user", 00:05:40.171 "bdev_opal_set_lock_state", 00:05:40.171 "bdev_opal_delete", 00:05:40.171 "bdev_opal_get_info", 00:05:40.171 "bdev_opal_create", 00:05:40.171 "bdev_nvme_opal_revert", 00:05:40.171 "bdev_nvme_opal_init", 00:05:40.171 "bdev_nvme_send_cmd", 00:05:40.171 "bdev_nvme_get_path_iostat", 00:05:40.171 "bdev_nvme_get_mdns_discovery_info", 00:05:40.171 "bdev_nvme_stop_mdns_discovery", 00:05:40.171 "bdev_nvme_start_mdns_discovery", 00:05:40.171 "bdev_nvme_set_multipath_policy", 00:05:40.171 "bdev_nvme_set_preferred_path", 00:05:40.171 "bdev_nvme_get_io_paths", 00:05:40.171 "bdev_nvme_remove_error_injection", 00:05:40.171 "bdev_nvme_add_error_injection", 00:05:40.171 "bdev_nvme_get_discovery_info", 00:05:40.171 "bdev_nvme_stop_discovery", 00:05:40.171 "bdev_nvme_start_discovery", 00:05:40.171 "bdev_nvme_get_controller_health_info", 00:05:40.171 "bdev_nvme_disable_controller", 00:05:40.171 "bdev_nvme_enable_controller", 00:05:40.171 "bdev_nvme_reset_controller", 00:05:40.171 "bdev_nvme_get_transport_statistics", 00:05:40.171 "bdev_nvme_apply_firmware", 00:05:40.171 "bdev_nvme_detach_controller", 00:05:40.171 "bdev_nvme_get_controllers", 00:05:40.171 "bdev_nvme_attach_controller", 00:05:40.171 "bdev_nvme_set_hotplug", 00:05:40.171 "bdev_nvme_set_options", 00:05:40.171 "bdev_passthru_delete", 00:05:40.171 "bdev_passthru_create", 00:05:40.171 "bdev_lvol_set_parent_bdev", 00:05:40.171 "bdev_lvol_set_parent", 00:05:40.171 "bdev_lvol_check_shallow_copy", 00:05:40.171 "bdev_lvol_start_shallow_copy", 00:05:40.171 "bdev_lvol_grow_lvstore", 00:05:40.171 "bdev_lvol_get_lvols", 00:05:40.171 "bdev_lvol_get_lvstores", 00:05:40.171 "bdev_lvol_delete", 00:05:40.171 "bdev_lvol_set_read_only", 00:05:40.171 "bdev_lvol_resize", 00:05:40.171 "bdev_lvol_decouple_parent", 00:05:40.171 "bdev_lvol_inflate", 00:05:40.171 "bdev_lvol_rename", 00:05:40.171 "bdev_lvol_clone_bdev", 00:05:40.171 "bdev_lvol_clone", 00:05:40.171 "bdev_lvol_snapshot", 00:05:40.171 "bdev_lvol_create", 
00:05:40.171 "bdev_lvol_delete_lvstore", 00:05:40.171 "bdev_lvol_rename_lvstore", 00:05:40.171 "bdev_lvol_create_lvstore", 00:05:40.171 "bdev_raid_set_options", 00:05:40.171 "bdev_raid_remove_base_bdev", 00:05:40.171 "bdev_raid_add_base_bdev", 00:05:40.171 "bdev_raid_delete", 00:05:40.171 "bdev_raid_create", 00:05:40.171 "bdev_raid_get_bdevs", 00:05:40.171 "bdev_error_inject_error", 00:05:40.171 "bdev_error_delete", 00:05:40.171 "bdev_error_create", 00:05:40.171 "bdev_split_delete", 00:05:40.171 "bdev_split_create", 00:05:40.171 "bdev_delay_delete", 00:05:40.171 "bdev_delay_create", 00:05:40.171 "bdev_delay_update_latency", 00:05:40.171 "bdev_zone_block_delete", 00:05:40.171 "bdev_zone_block_create", 00:05:40.171 "blobfs_create", 00:05:40.171 "blobfs_detect", 00:05:40.171 "blobfs_set_cache_size", 00:05:40.171 "bdev_aio_delete", 00:05:40.171 "bdev_aio_rescan", 00:05:40.171 "bdev_aio_create", 00:05:40.171 "bdev_ftl_set_property", 00:05:40.171 "bdev_ftl_get_properties", 00:05:40.171 "bdev_ftl_get_stats", 00:05:40.171 "bdev_ftl_unmap", 00:05:40.171 "bdev_ftl_unload", 00:05:40.171 "bdev_ftl_delete", 00:05:40.171 "bdev_ftl_load", 00:05:40.171 "bdev_ftl_create", 00:05:40.171 "bdev_virtio_attach_controller", 00:05:40.171 "bdev_virtio_scsi_get_devices", 00:05:40.171 "bdev_virtio_detach_controller", 00:05:40.171 "bdev_virtio_blk_set_hotplug", 00:05:40.171 "bdev_iscsi_delete", 00:05:40.171 "bdev_iscsi_create", 00:05:40.171 "bdev_iscsi_set_options", 00:05:40.171 "bdev_uring_delete", 00:05:40.171 "bdev_uring_rescan", 00:05:40.171 "bdev_uring_create", 00:05:40.171 "accel_error_inject_error", 00:05:40.171 "ioat_scan_accel_module", 00:05:40.171 "dsa_scan_accel_module", 00:05:40.171 "iaa_scan_accel_module", 00:05:40.171 "vfu_virtio_create_scsi_endpoint", 00:05:40.171 "vfu_virtio_scsi_remove_target", 00:05:40.171 "vfu_virtio_scsi_add_target", 00:05:40.171 "vfu_virtio_create_blk_endpoint", 00:05:40.171 "vfu_virtio_delete_endpoint", 00:05:40.171 "keyring_file_remove_key", 00:05:40.171 "keyring_file_add_key", 00:05:40.171 "keyring_linux_set_options", 00:05:40.171 "iscsi_get_histogram", 00:05:40.171 "iscsi_enable_histogram", 00:05:40.171 "iscsi_set_options", 00:05:40.171 "iscsi_get_auth_groups", 00:05:40.171 "iscsi_auth_group_remove_secret", 00:05:40.171 "iscsi_auth_group_add_secret", 00:05:40.171 "iscsi_delete_auth_group", 00:05:40.171 "iscsi_create_auth_group", 00:05:40.171 "iscsi_set_discovery_auth", 00:05:40.171 "iscsi_get_options", 00:05:40.171 "iscsi_target_node_request_logout", 00:05:40.171 "iscsi_target_node_set_redirect", 00:05:40.171 "iscsi_target_node_set_auth", 00:05:40.171 "iscsi_target_node_add_lun", 00:05:40.171 "iscsi_get_stats", 00:05:40.171 "iscsi_get_connections", 00:05:40.171 "iscsi_portal_group_set_auth", 00:05:40.171 "iscsi_start_portal_group", 00:05:40.171 "iscsi_delete_portal_group", 00:05:40.171 "iscsi_create_portal_group", 00:05:40.171 "iscsi_get_portal_groups", 00:05:40.171 "iscsi_delete_target_node", 00:05:40.171 "iscsi_target_node_remove_pg_ig_maps", 00:05:40.171 "iscsi_target_node_add_pg_ig_maps", 00:05:40.171 "iscsi_create_target_node", 00:05:40.171 "iscsi_get_target_nodes", 00:05:40.171 "iscsi_delete_initiator_group", 00:05:40.171 "iscsi_initiator_group_remove_initiators", 00:05:40.171 "iscsi_initiator_group_add_initiators", 00:05:40.171 "iscsi_create_initiator_group", 00:05:40.171 "iscsi_get_initiator_groups", 00:05:40.171 "nvmf_set_crdt", 00:05:40.171 "nvmf_set_config", 00:05:40.171 "nvmf_set_max_subsystems", 00:05:40.171 "nvmf_stop_mdns_prr", 00:05:40.171 
"nvmf_publish_mdns_prr", 00:05:40.171 "nvmf_subsystem_get_listeners", 00:05:40.171 "nvmf_subsystem_get_qpairs", 00:05:40.171 "nvmf_subsystem_get_controllers", 00:05:40.171 "nvmf_get_stats", 00:05:40.171 "nvmf_get_transports", 00:05:40.171 "nvmf_create_transport", 00:05:40.171 "nvmf_get_targets", 00:05:40.171 "nvmf_delete_target", 00:05:40.171 "nvmf_create_target", 00:05:40.171 "nvmf_subsystem_allow_any_host", 00:05:40.171 "nvmf_subsystem_remove_host", 00:05:40.171 "nvmf_subsystem_add_host", 00:05:40.171 "nvmf_ns_remove_host", 00:05:40.171 "nvmf_ns_add_host", 00:05:40.171 "nvmf_subsystem_remove_ns", 00:05:40.171 "nvmf_subsystem_add_ns", 00:05:40.171 "nvmf_subsystem_listener_set_ana_state", 00:05:40.171 "nvmf_discovery_get_referrals", 00:05:40.171 "nvmf_discovery_remove_referral", 00:05:40.171 "nvmf_discovery_add_referral", 00:05:40.171 "nvmf_subsystem_remove_listener", 00:05:40.171 "nvmf_subsystem_add_listener", 00:05:40.171 "nvmf_delete_subsystem", 00:05:40.171 "nvmf_create_subsystem", 00:05:40.171 "nvmf_get_subsystems", 00:05:40.171 "env_dpdk_get_mem_stats", 00:05:40.171 "nbd_get_disks", 00:05:40.171 "nbd_stop_disk", 00:05:40.171 "nbd_start_disk", 00:05:40.171 "ublk_recover_disk", 00:05:40.171 "ublk_get_disks", 00:05:40.171 "ublk_stop_disk", 00:05:40.171 "ublk_start_disk", 00:05:40.171 "ublk_destroy_target", 00:05:40.171 "ublk_create_target", 00:05:40.171 "virtio_blk_create_transport", 00:05:40.171 "virtio_blk_get_transports", 00:05:40.171 "vhost_controller_set_coalescing", 00:05:40.171 "vhost_get_controllers", 00:05:40.171 "vhost_delete_controller", 00:05:40.171 "vhost_create_blk_controller", 00:05:40.171 "vhost_scsi_controller_remove_target", 00:05:40.171 "vhost_scsi_controller_add_target", 00:05:40.171 "vhost_start_scsi_controller", 00:05:40.171 "vhost_create_scsi_controller", 00:05:40.171 "thread_set_cpumask", 00:05:40.171 "framework_get_governor", 00:05:40.171 "framework_get_scheduler", 00:05:40.171 "framework_set_scheduler", 00:05:40.171 "framework_get_reactors", 00:05:40.171 "thread_get_io_channels", 00:05:40.171 "thread_get_pollers", 00:05:40.171 "thread_get_stats", 00:05:40.171 "framework_monitor_context_switch", 00:05:40.171 "spdk_kill_instance", 00:05:40.171 "log_enable_timestamps", 00:05:40.171 "log_get_flags", 00:05:40.171 "log_clear_flag", 00:05:40.171 "log_set_flag", 00:05:40.171 "log_get_level", 00:05:40.171 "log_set_level", 00:05:40.171 "log_get_print_level", 00:05:40.171 "log_set_print_level", 00:05:40.171 "framework_enable_cpumask_locks", 00:05:40.171 "framework_disable_cpumask_locks", 00:05:40.171 "framework_wait_init", 00:05:40.171 "framework_start_init", 00:05:40.171 "scsi_get_devices", 00:05:40.171 "bdev_get_histogram", 00:05:40.171 "bdev_enable_histogram", 00:05:40.171 "bdev_set_qos_limit", 00:05:40.171 "bdev_set_qd_sampling_period", 00:05:40.171 "bdev_get_bdevs", 00:05:40.171 "bdev_reset_iostat", 00:05:40.171 "bdev_get_iostat", 00:05:40.171 "bdev_examine", 00:05:40.171 "bdev_wait_for_examine", 00:05:40.171 "bdev_set_options", 00:05:40.171 "notify_get_notifications", 00:05:40.171 "notify_get_types", 00:05:40.171 "accel_get_stats", 00:05:40.171 "accel_set_options", 00:05:40.171 "accel_set_driver", 00:05:40.171 "accel_crypto_key_destroy", 00:05:40.171 "accel_crypto_keys_get", 00:05:40.171 "accel_crypto_key_create", 00:05:40.171 "accel_assign_opc", 00:05:40.171 "accel_get_module_info", 00:05:40.171 "accel_get_opc_assignments", 00:05:40.171 "vmd_rescan", 00:05:40.171 "vmd_remove_device", 00:05:40.171 "vmd_enable", 00:05:40.171 "sock_get_default_impl", 00:05:40.171 
"sock_set_default_impl", 00:05:40.171 "sock_impl_set_options", 00:05:40.171 "sock_impl_get_options", 00:05:40.171 "iobuf_get_stats", 00:05:40.171 "iobuf_set_options", 00:05:40.171 "keyring_get_keys", 00:05:40.171 "framework_get_pci_devices", 00:05:40.171 "framework_get_config", 00:05:40.171 "framework_get_subsystems", 00:05:40.171 "vfu_tgt_set_base_path", 00:05:40.171 "trace_get_info", 00:05:40.171 "trace_get_tpoint_group_mask", 00:05:40.171 "trace_disable_tpoint_group", 00:05:40.171 "trace_enable_tpoint_group", 00:05:40.171 "trace_clear_tpoint_mask", 00:05:40.171 "trace_set_tpoint_mask", 00:05:40.171 "spdk_get_version", 00:05:40.171 "rpc_get_methods" 00:05:40.171 ] 00:05:40.172 02:54:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:40.172 02:54:46 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:40.172 02:54:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.429 02:54:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:40.429 02:54:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60637 00:05:40.429 02:54:46 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 60637 ']' 00:05:40.429 02:54:46 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 60637 00:05:40.429 02:54:46 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:40.429 02:54:46 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.429 02:54:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60637 00:05:40.429 killing process with pid 60637 00:05:40.429 02:54:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.429 02:54:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.429 02:54:46 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60637' 00:05:40.429 02:54:46 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 60637 00:05:40.429 02:54:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 60637 00:05:42.332 ************************************ 00:05:42.332 END TEST spdkcli_tcp 00:05:42.332 ************************************ 00:05:42.332 00:05:42.332 real 0m3.263s 00:05:42.332 user 0m5.866s 00:05:42.332 sys 0m0.463s 00:05:42.332 02:54:48 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.332 02:54:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.332 02:54:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.332 02:54:48 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:42.332 02:54:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.332 02:54:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.332 02:54:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.332 ************************************ 00:05:42.332 START TEST dpdk_mem_utility 00:05:42.332 ************************************ 00:05:42.332 02:54:48 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:42.332 * Looking for test storage... 
00:05:42.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:42.332 02:54:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:42.332 02:54:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60745 00:05:42.332 02:54:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:42.332 02:54:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60745 00:05:42.332 02:54:48 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 60745 ']' 00:05:42.332 02:54:48 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.332 02:54:48 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.332 02:54:48 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.332 02:54:48 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.332 02:54:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.332 [2024-07-13 02:54:48.728388] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:42.332 [2024-07-13 02:54:48.728517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60745 ] 00:05:42.591 [2024-07-13 02:54:48.887492] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.591 [2024-07-13 02:54:49.051068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.850 [2024-07-13 02:54:49.209206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:43.417 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.417 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:43.417 02:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:43.417 02:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:43.417 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.417 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.417 { 00:05:43.417 "filename": "/tmp/spdk_mem_dump.txt" 00:05:43.417 } 00:05:43.417 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.417 02:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:43.417 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:43.417 1 heaps totaling size 820.000000 MiB 00:05:43.417 size: 820.000000 MiB heap id: 0 00:05:43.417 end heaps---------- 00:05:43.417 8 mempools totaling size 598.116089 MiB 00:05:43.417 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:43.417 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:43.417 size: 84.521057 MiB name: bdev_io_60745 00:05:43.417 size: 51.011292 MiB name: evtpool_60745 00:05:43.417 size: 50.003479 
MiB name: msgpool_60745 00:05:43.417 size: 21.763794 MiB name: PDU_Pool 00:05:43.417 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:43.417 size: 0.026123 MiB name: Session_Pool 00:05:43.417 end mempools------- 00:05:43.417 6 memzones totaling size 4.142822 MiB 00:05:43.417 size: 1.000366 MiB name: RG_ring_0_60745 00:05:43.417 size: 1.000366 MiB name: RG_ring_1_60745 00:05:43.417 size: 1.000366 MiB name: RG_ring_4_60745 00:05:43.417 size: 1.000366 MiB name: RG_ring_5_60745 00:05:43.417 size: 0.125366 MiB name: RG_ring_2_60745 00:05:43.417 size: 0.015991 MiB name: RG_ring_3_60745 00:05:43.417 end memzones------- 00:05:43.417 02:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:43.417 heap id: 0 total size: 820.000000 MiB number of busy elements: 297 number of free elements: 18 00:05:43.417 list of free elements. size: 18.452271 MiB 00:05:43.417 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:43.417 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:43.417 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:43.417 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:43.417 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:43.417 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:43.417 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:43.417 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:43.417 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:43.417 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:43.417 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:43.417 element at address: 0x200000200000 with size: 0.830200 MiB 00:05:43.417 element at address: 0x20001b000000 with size: 0.564880 MiB 00:05:43.417 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:43.417 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:43.417 element at address: 0x200013800000 with size: 0.467651 MiB 00:05:43.417 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:43.417 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:43.417 list of standard malloc elements. 
size: 199.283325 MiB 00:05:43.417 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:43.417 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:43.417 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:43.417 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:43.417 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:43.417 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:43.417 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:43.417 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:43.417 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:43.417 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:43.417 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:43.417 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d6f00 with size: 0.000244 MiB 
00:05:43.418 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:43.418 element at 
address: 0x2000137ff380 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200013877b80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0911c0 
with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:43.418 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0942c0 with size: 0.000244 MiB 
00:05:43.419 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:43.419 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:43.419 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:43.419 element at 
address: 0x20002846cc80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846fd80 
with size: 0.000244 MiB 00:05:43.419 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:43.419 list of memzone associated elements. size: 602.264404 MiB 00:05:43.419 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:43.419 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:43.419 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:43.419 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:43.419 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:43.419 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60745_0 00:05:43.419 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:43.419 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60745_0 00:05:43.419 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:43.419 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60745_0 00:05:43.419 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:43.419 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:43.419 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:43.419 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:43.419 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:43.419 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60745 00:05:43.419 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:43.419 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60745 00:05:43.419 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:43.419 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60745 00:05:43.419 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:43.419 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:43.419 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:43.419 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:43.419 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:43.419 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:43.419 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:43.419 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:43.419 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:43.419 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60745 00:05:43.419 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:43.419 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60745 00:05:43.419 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:43.419 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60745 00:05:43.419 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:43.419 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60745 00:05:43.419 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:43.419 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60745 00:05:43.420 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:43.420 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:43.420 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:43.420 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:43.420 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:43.420 associated memzone info: size: 0.250366 MiB name: 
RG_MP_PDU_immediate_data_Pool 00:05:43.420 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:43.420 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60745 00:05:43.420 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:43.420 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:43.420 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:43.420 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:43.420 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:43.420 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60745 00:05:43.420 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:43.420 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:43.420 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:43.420 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60745 00:05:43.420 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:43.420 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60745 00:05:43.420 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:43.420 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:43.420 02:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:43.420 02:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60745 00:05:43.420 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 60745 ']' 00:05:43.420 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 60745 00:05:43.420 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:43.420 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.420 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60745 00:05:43.420 killing process with pid 60745 00:05:43.420 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.420 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.420 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60745' 00:05:43.420 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 60745 00:05:43.420 02:54:49 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 60745 00:05:45.321 ************************************ 00:05:45.321 END TEST dpdk_mem_utility 00:05:45.321 ************************************ 00:05:45.321 00:05:45.321 real 0m3.086s 00:05:45.321 user 0m3.239s 00:05:45.321 sys 0m0.418s 00:05:45.321 02:54:51 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.321 02:54:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.321 02:54:51 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.321 02:54:51 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:45.321 02:54:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.321 02:54:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.321 02:54:51 -- common/autotest_common.sh@10 -- # set +x 00:05:45.321 ************************************ 00:05:45.321 START TEST event 00:05:45.321 ************************************ 00:05:45.321 02:54:51 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 
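The dpdk_mem_utility run that ends here prints one line per allocated heap element and memzone, which is verbose but completely regular. For offline triage it can be reduced to a count and a total with plain awk; a minimal sketch, assuming the relevant portion of the console output has been saved to a file (mem_dump.txt is a placeholder name, not a path from this run):

    # Count "element at address ... with size: <N> MiB" lines and sum their sizes.
    awk '/element at address/ {
            for (i = 1; i <= NF; i++)
                if ($i == "size:") { total += $(i + 1); count++ }
         }
         END { printf "elements: %d, total: %.6f MiB\n", count, total }' mem_dump.txt

With the test's normal one-entry-per-line console output this counts each element exactly once, since every element line carries a single "size:" token.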
00:05:45.321 * Looking for test storage... 00:05:45.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:45.321 02:54:51 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:45.321 02:54:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:45.321 02:54:51 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:45.321 02:54:51 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:45.321 02:54:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.321 02:54:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.321 ************************************ 00:05:45.321 START TEST event_perf 00:05:45.321 ************************************ 00:05:45.321 02:54:51 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:45.580 Running I/O for 1 seconds...[2024-07-13 02:54:51.830865] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:45.580 [2024-07-13 02:54:51.831212] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60835 ] 00:05:45.580 [2024-07-13 02:54:52.003299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.839 [2024-07-13 02:54:52.155950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.839 [2024-07-13 02:54:52.156065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.839 [2024-07-13 02:54:52.156205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.839 Running I/O for 1 seconds...[2024-07-13 02:54:52.156220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.213 00:05:47.213 lcore 0: 190004 00:05:47.213 lcore 1: 190003 00:05:47.213 lcore 2: 190004 00:05:47.213 lcore 3: 190004 00:05:47.213 done. 00:05:47.213 00:05:47.213 real 0m1.702s 00:05:47.213 user 0m4.457s 00:05:47.213 sys 0m0.116s 00:05:47.213 02:54:53 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.213 02:54:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.213 ************************************ 00:05:47.213 END TEST event_perf 00:05:47.213 ************************************ 00:05:47.213 02:54:53 event -- common/autotest_common.sh@1142 -- # return 0 00:05:47.213 02:54:53 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:47.213 02:54:53 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:47.213 02:54:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.213 02:54:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.213 ************************************ 00:05:47.213 START TEST event_reactor 00:05:47.213 ************************************ 00:05:47.213 02:54:53 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:47.213 [2024-07-13 02:54:53.585042] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:47.213 [2024-07-13 02:54:53.585212] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60874 ] 00:05:47.471 [2024-07-13 02:54:53.754409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.471 [2024-07-13 02:54:53.908865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.848 test_start 00:05:48.848 oneshot 00:05:48.848 tick 100 00:05:48.848 tick 100 00:05:48.848 tick 250 00:05:48.848 tick 100 00:05:48.848 tick 100 00:05:48.848 tick 100 00:05:48.848 tick 250 00:05:48.848 tick 500 00:05:48.848 tick 100 00:05:48.848 tick 100 00:05:48.848 tick 250 00:05:48.848 tick 100 00:05:48.848 tick 100 00:05:48.848 test_end 00:05:48.848 00:05:48.848 real 0m1.689s 00:05:48.848 user 0m1.470s 00:05:48.848 sys 0m0.109s 00:05:48.848 02:54:55 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.848 ************************************ 00:05:48.848 END TEST event_reactor 00:05:48.848 ************************************ 00:05:48.848 02:54:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:48.848 02:54:55 event -- common/autotest_common.sh@1142 -- # return 0 00:05:48.848 02:54:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:48.848 02:54:55 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:48.848 02:54:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.848 02:54:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.848 ************************************ 00:05:48.848 START TEST event_reactor_perf 00:05:48.848 ************************************ 00:05:48.848 02:54:55 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:48.848 [2024-07-13 02:54:55.321622] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:48.848 [2024-07-13 02:54:55.321795] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60911 ] 00:05:49.107 [2024-07-13 02:54:55.488872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.366 [2024-07-13 02:54:55.651221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.743 test_start 00:05:50.743 test_end 00:05:50.743 Performance: 342719 events per second 00:05:50.743 ************************************ 00:05:50.743 END TEST event_reactor_perf 00:05:50.743 ************************************ 00:05:50.743 00:05:50.743 real 0m1.695s 00:05:50.743 user 0m1.495s 00:05:50.743 sys 0m0.092s 00:05:50.743 02:54:56 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.743 02:54:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.743 02:54:57 event -- common/autotest_common.sh@1142 -- # return 0 00:05:50.743 02:54:57 event -- event/event.sh@49 -- # uname -s 00:05:50.743 02:54:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:50.744 02:54:57 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:50.744 02:54:57 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.744 02:54:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.744 02:54:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.744 ************************************ 00:05:50.744 START TEST event_scheduler 00:05:50.744 ************************************ 00:05:50.744 02:54:57 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:50.744 * Looking for test storage... 00:05:50.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:50.744 02:54:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:50.744 02:54:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60979 00:05:50.744 02:54:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.744 02:54:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:50.744 02:54:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60979 00:05:50.744 02:54:57 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60979 ']' 00:05:50.744 02:54:57 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.744 02:54:57 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.744 02:54:57 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.744 02:54:57 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.744 02:54:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.744 [2024-07-13 02:54:57.219783] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:50.744 [2024-07-13 02:54:57.220474] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60979 ] 00:05:51.003 [2024-07-13 02:54:57.394872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.262 [2024-07-13 02:54:57.576217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.262 [2024-07-13 02:54:57.576292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.262 [2024-07-13 02:54:57.576412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.262 [2024-07-13 02:54:57.576429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.829 02:54:58 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.829 02:54:58 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:51.829 02:54:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:51.829 02:54:58 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.829 02:54:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.829 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.829 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.829 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.829 POWER: Cannot set governor of lcore 0 to performance 00:05:51.829 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.829 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.829 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.829 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.829 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:51.829 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:51.829 POWER: Unable to set Power Management Environment for lcore 0 00:05:51.829 [2024-07-13 02:54:58.170914] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:51.829 [2024-07-13 02:54:58.170953] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:51.829 [2024-07-13 02:54:58.170970] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:51.829 [2024-07-13 02:54:58.171002] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:51.829 [2024-07-13 02:54:58.171030] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:51.829 [2024-07-13 02:54:58.171041] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:51.829 02:54:58 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.829 02:54:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:51.829 02:54:58 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.829 02:54:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.089 [2024-07-13 02:54:58.323194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:52.089 [2024-07-13 02:54:58.397097] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:52.089 02:54:58 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.089 02:54:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:52.089 02:54:58 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.089 02:54:58 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.089 02:54:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.089 ************************************ 00:05:52.089 START TEST scheduler_create_thread 00:05:52.089 ************************************ 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.089 2 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.089 3 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.089 4 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.089 5 00:05:52.089 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.090 6 00:05:52.090 
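The scheduler_create_thread subtest running here drives the scheduler app purely through its plugin RPCs: each scheduler_thread_create call pins a thread to a core mask and declares how busy it should claim to be. Condensed from the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py aimed at the scheduler app's socket; the loop form is illustrative, the test issues the same calls one by one):

    # One fully-active and one idle thread pinned to each of the four cores in the 0xF mask.
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done

The later calls in the trace add unpinned threads (one_third_active at -a 30, half_active at -a 0), raise one to 50% with scheduler_thread_set_active, and finally create and delete a short-lived thread, giving the dynamic scheduler a mixed load to rebalance.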
02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.090 7 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.090 8 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.090 9 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.090 10 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.090 02:54:58 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.090 02:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.994 02:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.994 02:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:53.994 02:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:53.994 02:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.994 02:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.561 ************************************ 00:05:54.561 END TEST scheduler_create_thread 00:05:54.561 ************************************ 00:05:54.561 02:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.561 00:05:54.561 real 0m2.620s 00:05:54.561 user 0m0.022s 00:05:54.561 sys 0m0.003s 00:05:54.561 02:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.561 02:55:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.819 02:55:01 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:54.819 02:55:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:54.819 02:55:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60979 00:05:54.819 02:55:01 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60979 ']' 00:05:54.819 02:55:01 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60979 00:05:54.819 02:55:01 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:54.819 02:55:01 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.819 02:55:01 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60979 00:05:54.819 killing process with pid 60979 00:05:54.819 02:55:01 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:54.819 02:55:01 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:54.819 02:55:01 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60979' 00:05:54.819 02:55:01 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60979 00:05:54.819 02:55:01 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60979 00:05:55.096 [2024-07-13 02:55:01.511191] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
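The POWER errors near the start of this test are expected in this environment: framework_set_scheduler dynamic first tries to take control of the host's cpufreq governor, and when /sys/devices/system/cpu/cpu*/cpufreq is absent (and, as on this VM, the virtio power-agent channel is unreachable too) it falls back to the built-in limits logged above: load limit 20, core limit 80, core busy 95. A quick, generic way to see up front whether a host exposes the cpufreq interface at all (this probe is not part of the test scripts):

    # Print each CPU's scaling governor, or note that cpufreq is unavailable.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        gov="$cpu/cpufreq/scaling_governor"
        if [ -r "$gov" ]; then
            printf '%s: %s\n' "${cpu##*/}" "$(cat "$gov")"
        else
            printf '%s: no cpufreq support\n' "${cpu##*/}"
        fi
    done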
00:05:56.035 ************************************ 00:05:56.035 END TEST event_scheduler 00:05:56.035 ************************************ 00:05:56.035 00:05:56.035 real 0m5.456s 00:05:56.035 user 0m9.543s 00:05:56.035 sys 0m0.397s 00:05:56.035 02:55:02 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.035 02:55:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.293 02:55:02 event -- common/autotest_common.sh@1142 -- # return 0 00:05:56.293 02:55:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:56.293 02:55:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:56.293 02:55:02 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.293 02:55:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.293 02:55:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.293 ************************************ 00:05:56.294 START TEST app_repeat 00:05:56.294 ************************************ 00:05:56.294 02:55:02 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61085 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:56.294 Process app_repeat pid: 61085 00:05:56.294 spdk_app_start Round 0 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61085' 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:56.294 02:55:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61085 /var/tmp/spdk-nbd.sock 00:05:56.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.294 02:55:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61085 ']' 00:05:56.294 02:55:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.294 02:55:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.294 02:55:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.294 02:55:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.294 02:55:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.294 [2024-07-13 02:55:02.609117] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:56.294 [2024-07-13 02:55:02.609285] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61085 ] 00:05:56.294 [2024-07-13 02:55:02.770700] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.553 [2024-07-13 02:55:02.932858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.553 [2024-07-13 02:55:02.932861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.812 [2024-07-13 02:55:03.085493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:57.074 02:55:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.074 02:55:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:57.074 02:55:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.675 Malloc0 00:05:57.675 02:55:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.675 Malloc1 00:05:57.934 02:55:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.934 02:55:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.935 /dev/nbd0 00:05:57.935 02:55:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.935 02:55:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:57.935 02:55:04 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.935 1+0 records in 00:05:57.935 1+0 records out 00:05:57.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607878 s, 6.7 MB/s 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:57.935 02:55:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:57.935 02:55:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.935 02:55:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.935 02:55:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.193 /dev/nbd1 00:05:58.452 02:55:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.452 02:55:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.452 1+0 records in 00:05:58.452 1+0 records out 00:05:58.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285706 s, 14.3 MB/s 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.452 02:55:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:58.453 02:55:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.453 02:55:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.453 02:55:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:58.453 02:55:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.453 02:55:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.711 02:55:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.711 { 00:05:58.711 "nbd_device": "/dev/nbd0", 00:05:58.711 "bdev_name": "Malloc0" 00:05:58.711 }, 00:05:58.711 { 00:05:58.711 "nbd_device": "/dev/nbd1", 00:05:58.711 "bdev_name": "Malloc1" 00:05:58.711 } 00:05:58.711 ]' 00:05:58.711 02:55:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.711 { 00:05:58.711 "nbd_device": "/dev/nbd0", 00:05:58.711 "bdev_name": "Malloc0" 00:05:58.711 }, 00:05:58.711 { 00:05:58.711 "nbd_device": "/dev/nbd1", 00:05:58.711 "bdev_name": "Malloc1" 00:05:58.711 } 00:05:58.711 ]' 00:05:58.711 02:55:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.711 02:55:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.711 /dev/nbd1' 00:05:58.711 02:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.711 /dev/nbd1' 00:05:58.711 02:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.711 02:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.711 02:55:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.711 02:55:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.712 256+0 records in 00:05:58.712 256+0 records out 00:05:58.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00661699 s, 158 MB/s 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.712 256+0 records in 00:05:58.712 256+0 records out 00:05:58.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278227 s, 37.7 MB/s 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.712 256+0 records in 00:05:58.712 256+0 records out 00:05:58.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0363438 s, 28.9 MB/s 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.712 02:55:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.971 02:55:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.971 02:55:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.971 02:55:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.971 02:55:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.971 02:55:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.971 02:55:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.971 02:55:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.971 02:55:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.971 02:55:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.971 02:55:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.229 02:55:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.229 02:55:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.229 02:55:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.229 02:55:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.229 02:55:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.229 02:55:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.229 02:55:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.229 02:55:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.229 02:55:05 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.229 02:55:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.229 02:55:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.488 02:55:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.488 02:55:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.488 02:55:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.488 02:55:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.488 02:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.488 02:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.747 02:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.747 02:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.747 02:55:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.747 02:55:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.747 02:55:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.747 02:55:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.747 02:55:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.005 02:55:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.942 [2024-07-13 02:55:07.314048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.201 [2024-07-13 02:55:07.451963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.201 [2024-07-13 02:55:07.451962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.201 [2024-07-13 02:55:07.607688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:01.201 [2024-07-13 02:55:07.607799] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.201 [2024-07-13 02:55:07.607839] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.103 spdk_app_start Round 1 00:06:03.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.103 02:55:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:03.103 02:55:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:03.103 02:55:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61085 /var/tmp/spdk-nbd.sock 00:06:03.103 02:55:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61085 ']' 00:06:03.103 02:55:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.103 02:55:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.103 02:55:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
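Round 0 of app_repeat, completed just above, exercises the full nbd data path: two malloc bdevs are created (bdev_malloc_create 64 4096), exported as /dev/nbd0 and /dev/nbd1, filled with a random pattern through the block layer, verified with cmp, and torn down again. Stripped of the xtrace noise, the core write-and-verify step reduces to roughly the following (nbdrandtest stands for the temporary pattern file the test keeps under its test directory):

    # Write a 1 MiB random pattern through each exported nbd device and check it reads back intact.
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$nbd"
    done
    rm nbdrandtest

Rounds 1 and 2, which follow, repeat the same sequence against a freshly restarted app_repeat instance, so the nbd export path has to come up cleanly on every pass.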
00:06:03.103 02:55:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.103 02:55:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.362 02:55:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.362 02:55:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:03.362 02:55:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.362 Malloc0 00:06:03.362 02:55:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.620 Malloc1 00:06:03.620 02:55:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.620 02:55:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.879 /dev/nbd0 00:06:03.879 02:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.138 02:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.138 1+0 records in 00:06:04.138 1+0 records out 
00:06:04.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426052 s, 9.6 MB/s 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.138 02:55:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.138 02:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.138 02:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.138 02:55:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.397 /dev/nbd1 00:06:04.397 02:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.397 02:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.397 02:55:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:04.397 02:55:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.397 02:55:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.397 02:55:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.398 02:55:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:04.398 02:55:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.398 02:55:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.398 02:55:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.398 02:55:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.398 1+0 records in 00:06:04.398 1+0 records out 00:06:04.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620953 s, 6.6 MB/s 00:06:04.398 02:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.398 02:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.398 02:55:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.398 02:55:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.398 02:55:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.398 02:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.398 02:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.398 02:55:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.398 02:55:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.398 02:55:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.657 02:55:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.657 { 00:06:04.657 "nbd_device": "/dev/nbd0", 00:06:04.657 "bdev_name": "Malloc0" 00:06:04.657 }, 00:06:04.657 { 00:06:04.657 "nbd_device": "/dev/nbd1", 00:06:04.657 "bdev_name": "Malloc1" 00:06:04.657 } 
00:06:04.657 ]' 00:06:04.657 02:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.657 02:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.657 { 00:06:04.657 "nbd_device": "/dev/nbd0", 00:06:04.657 "bdev_name": "Malloc0" 00:06:04.657 }, 00:06:04.657 { 00:06:04.657 "nbd_device": "/dev/nbd1", 00:06:04.657 "bdev_name": "Malloc1" 00:06:04.657 } 00:06:04.657 ]' 00:06:04.657 02:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.657 /dev/nbd1' 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.657 /dev/nbd1' 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.657 256+0 records in 00:06:04.657 256+0 records out 00:06:04.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00588269 s, 178 MB/s 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.657 256+0 records in 00:06:04.657 256+0 records out 00:06:04.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313381 s, 33.5 MB/s 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.657 256+0 records in 00:06:04.657 256+0 records out 00:06:04.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309881 s, 33.8 MB/s 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.657 02:55:11 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.657 02:55:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.916 02:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.916 02:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.916 02:55:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.916 02:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.916 02:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.916 02:55:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.916 02:55:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.916 02:55:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.916 02:55:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.916 02:55:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.176 02:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.176 02:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.176 02:55:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.176 02:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.176 02:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.176 02:55:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.176 02:55:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.176 02:55:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.176 02:55:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.176 02:55:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.176 02:55:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.435 02:55:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.435 02:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.435 02:55:11 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:05.693 02:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.693 02:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.693 02:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.693 02:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.693 02:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.693 02:55:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.693 02:55:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.693 02:55:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.693 02:55:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.693 02:55:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.952 02:55:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.889 [2024-07-13 02:55:13.370863] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.148 [2024-07-13 02:55:13.524778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.148 [2024-07-13 02:55:13.524786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.407 [2024-07-13 02:55:13.667100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.407 [2024-07-13 02:55:13.667257] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.407 [2024-07-13 02:55:13.667277] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.353 spdk_app_start Round 2 00:06:09.353 02:55:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.353 02:55:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:09.353 02:55:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61085 /var/tmp/spdk-nbd.sock 00:06:09.353 02:55:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61085 ']' 00:06:09.353 02:55:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.353 02:55:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.353 02:55:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
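The xtrace above is one complete app_repeat round. Condensed into plain shell it looks roughly like the sketch below; the RPC sub-commands, socket path, sizes and file names are copied verbatim from the trace, while the retry loops, traps and helper functions of the real test/event/event.sh and test/bdev/nbd_common.sh are omitted, so treat this as an illustration rather than the actual test code.

#!/usr/bin/env bash
# Sketch only: assumes an SPDK app is already serving RPCs on /var/tmp/spdk-nbd.sock.
set -euo pipefail
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

# Create two 64 MB malloc bdevs (4 KiB block size) and expose them over NBD.
$rpc bdev_malloc_create 64 4096      # -> Malloc0
$rpc bdev_malloc_create 64 4096      # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

# Write 1 MiB of random data through each NBD device and verify it reads back.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" "$nbd"
done
rm "$tmp"

# Tear the round down: detach both NBD devices and stop the app.
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc spdk_kill_instance SIGTERM

The nbd_get_disks / jq / "'[' 2 -ne 2 ']'" lines in the trace express the same idea through the nbd_get_count helper: the test asserts that exactly two NBD devices are attached before the I/O check and zero after the teardown.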
00:06:09.353 02:55:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.353 02:55:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.353 02:55:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.353 02:55:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:09.353 02:55:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.612 Malloc0 00:06:09.612 02:55:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.870 Malloc1 00:06:09.870 02:55:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.870 02:55:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.129 /dev/nbd0 00:06:10.129 02:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.129 02:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.129 1+0 records in 00:06:10.129 1+0 records out 
00:06:10.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288635 s, 14.2 MB/s 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.129 02:55:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:10.129 02:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.129 02:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.129 02:55:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.388 /dev/nbd1 00:06:10.388 02:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.388 02:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.388 1+0 records in 00:06:10.388 1+0 records out 00:06:10.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460447 s, 8.9 MB/s 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.388 02:55:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:10.388 02:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.388 02:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.388 02:55:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.388 02:55:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.388 02:55:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.647 02:55:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.647 { 00:06:10.647 "nbd_device": "/dev/nbd0", 00:06:10.647 "bdev_name": "Malloc0" 00:06:10.647 }, 00:06:10.647 { 00:06:10.647 "nbd_device": "/dev/nbd1", 00:06:10.647 "bdev_name": "Malloc1" 00:06:10.647 } 
00:06:10.647 ]' 00:06:10.647 02:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.647 02:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.647 { 00:06:10.647 "nbd_device": "/dev/nbd0", 00:06:10.647 "bdev_name": "Malloc0" 00:06:10.647 }, 00:06:10.647 { 00:06:10.647 "nbd_device": "/dev/nbd1", 00:06:10.647 "bdev_name": "Malloc1" 00:06:10.647 } 00:06:10.647 ]' 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.647 /dev/nbd1' 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.647 /dev/nbd1' 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.647 256+0 records in 00:06:10.647 256+0 records out 00:06:10.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00578602 s, 181 MB/s 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.647 256+0 records in 00:06:10.647 256+0 records out 00:06:10.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240764 s, 43.6 MB/s 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.647 256+0 records in 00:06:10.647 256+0 records out 00:06:10.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296815 s, 35.3 MB/s 00:06:10.647 02:55:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.648 02:55:17 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.648 02:55:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.906 02:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.165 02:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.165 02:55:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.165 02:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.165 02:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.165 02:55:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.165 02:55:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.165 02:55:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.165 02:55:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.165 02:55:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.424 02:55:17 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:11.683 02:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.683 02:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.683 02:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.683 02:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.683 02:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.683 02:55:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.683 02:55:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.683 02:55:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.683 02:55:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.683 02:55:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.941 02:55:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.318 [2024-07-13 02:55:19.407894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.318 [2024-07-13 02:55:19.554272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.318 [2024-07-13 02:55:19.554272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.318 [2024-07-13 02:55:19.706578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.318 [2024-07-13 02:55:19.706697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.318 [2024-07-13 02:55:19.706719] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.220 02:55:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61085 /var/tmp/spdk-nbd.sock 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61085 ']' 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:15.220 02:55:21 event.app_repeat -- event/event.sh@39 -- # killprocess 61085 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 61085 ']' 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 61085 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61085 00:06:15.220 killing process with pid 61085 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61085' 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@967 -- # kill 61085 00:06:15.220 02:55:21 event.app_repeat -- common/autotest_common.sh@972 -- # wait 61085 00:06:16.154 spdk_app_start is called in Round 0. 00:06:16.154 Shutdown signal received, stop current app iteration 00:06:16.154 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:16.154 spdk_app_start is called in Round 1. 00:06:16.154 Shutdown signal received, stop current app iteration 00:06:16.154 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:16.154 spdk_app_start is called in Round 2. 00:06:16.154 Shutdown signal received, stop current app iteration 00:06:16.154 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:16.154 spdk_app_start is called in Round 3. 
00:06:16.154 Shutdown signal received, stop current app iteration 00:06:16.154 02:55:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:16.154 02:55:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:16.154 00:06:16.154 real 0m20.026s 00:06:16.154 user 0m43.387s 00:06:16.154 sys 0m2.634s 00:06:16.154 02:55:22 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.154 ************************************ 00:06:16.154 END TEST app_repeat 00:06:16.154 ************************************ 00:06:16.154 02:55:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.154 02:55:22 event -- common/autotest_common.sh@1142 -- # return 0 00:06:16.154 02:55:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:16.154 02:55:22 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:16.154 02:55:22 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.154 02:55:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.154 02:55:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.154 ************************************ 00:06:16.154 START TEST cpu_locks 00:06:16.154 ************************************ 00:06:16.154 02:55:22 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:16.412 * Looking for test storage... 00:06:16.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:16.412 02:55:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:16.412 02:55:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:16.412 02:55:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:16.412 02:55:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:16.412 02:55:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.412 02:55:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.412 02:55:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.412 ************************************ 00:06:16.412 START TEST default_locks 00:06:16.412 ************************************ 00:06:16.412 02:55:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:16.412 02:55:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61530 00:06:16.412 02:55:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61530 00:06:16.412 02:55:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.412 02:55:22 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61530 ']' 00:06:16.413 02:55:22 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.413 02:55:22 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.413 02:55:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
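The cpu_locks suite that begins here revolves around one observable: a target started with a core mask takes a file lock whose name contains spdk_cpu_lock, and lslocks can see it against the target's pid. A stripped-down version of the default_locks check, using only commands that appear in the trace; the waitforlisten/killprocess helpers from autotest_common.sh are replaced by a sleep and a plain kill, so this is an approximation, not the real test:

#!/usr/bin/env bash
set -euo pipefail
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$spdk_tgt -m 0x1 &     # pin the target to core 0; it acquires the per-core lock
pid=$!
sleep 2                # the real test polls the RPC socket (waitforlisten) instead

# locks_exist: the lock must be visible for this pid while the target runs.
lslocks -p "$pid" | grep -q spdk_cpu_lock
echo "core lock held by pid $pid"

kill "$pid"            # killprocess in the real script also waits for the exit
wait "$pid" || true

The later "ERROR: process (pid: 61530) is no longer running" block appears to be the negative half of the same test: once the process is gone, waitforlisten on the stale pid is expected to fail, which is why it runs under the NOT wrapper and an exit status of 1 is treated as success.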
00:06:16.413 02:55:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.413 02:55:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.413 [2024-07-13 02:55:22.846367] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:16.413 [2024-07-13 02:55:22.847164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61530 ] 00:06:16.670 [2024-07-13 02:55:23.016386] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.933 [2024-07-13 02:55:23.174239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.933 [2024-07-13 02:55:23.320476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.500 02:55:23 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.500 02:55:23 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:17.500 02:55:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61530 00:06:17.500 02:55:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.500 02:55:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61530 00:06:17.757 02:55:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61530 00:06:17.757 02:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 61530 ']' 00:06:17.757 02:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 61530 00:06:17.757 02:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:17.757 02:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.757 02:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61530 00:06:17.757 02:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.757 02:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.757 killing process with pid 61530 00:06:17.757 02:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61530' 00:06:17.757 02:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 61530 00:06:17.757 02:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 61530 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61530 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61530 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.657 02:55:25 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 61530 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61530 ']' 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.657 ERROR: process (pid: 61530) is no longer running 00:06:19.657 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61530) - No such process 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.657 00:06:19.657 real 0m3.145s 00:06:19.657 user 0m3.230s 00:06:19.657 sys 0m0.502s 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.657 02:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.657 ************************************ 00:06:19.657 END TEST default_locks 00:06:19.657 ************************************ 00:06:19.657 02:55:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:19.657 02:55:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:19.657 02:55:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.657 02:55:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.657 02:55:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.657 ************************************ 00:06:19.657 START TEST default_locks_via_rpc 00:06:19.657 ************************************ 00:06:19.658 02:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:19.658 02:55:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61599 00:06:19.658 02:55:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61599 00:06:19.658 02:55:25 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 61599 ']' 00:06:19.658 02:55:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.658 02:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.658 02:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.658 02:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.658 02:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.658 02:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.658 [2024-07-13 02:55:26.034634] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:19.658 [2024-07-13 02:55:26.034809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61599 ] 00:06:19.915 [2024-07-13 02:55:26.203545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.915 [2024-07-13 02:55:26.355965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.173 [2024-07-13 02:55:26.520188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.738 02:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.738 02:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.738 02:55:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61599 00:06:20.738 02:55:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61599 00:06:20.738 02:55:27 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.996 02:55:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61599 00:06:20.996 02:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 61599 ']' 00:06:20.996 02:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 61599 00:06:20.996 02:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:20.996 02:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.996 02:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61599 00:06:20.996 02:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.996 02:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.996 killing process with pid 61599 00:06:20.996 02:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61599' 00:06:20.996 02:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 61599 00:06:20.996 02:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 61599 00:06:22.896 00:06:22.896 real 0m3.348s 00:06:22.896 user 0m3.453s 00:06:22.896 sys 0m0.584s 00:06:22.896 02:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.896 02:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.896 ************************************ 00:06:22.896 END TEST default_locks_via_rpc 00:06:22.896 ************************************ 00:06:22.896 02:55:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:22.896 02:55:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:22.896 02:55:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.896 02:55:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.896 02:55:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.896 ************************************ 00:06:22.896 START TEST non_locking_app_on_locked_coremask 00:06:22.896 ************************************ 00:06:22.896 02:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:22.896 02:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61662 00:06:22.896 02:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61662 /var/tmp/spdk.sock 00:06:22.896 02:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.896 02:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61662 ']' 00:06:22.896 02:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.896 02:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.896 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:06:22.896 02:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.896 02:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.896 02:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.153 [2024-07-13 02:55:29.410283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:23.153 [2024-07-13 02:55:29.410467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61662 ] 00:06:23.153 [2024-07-13 02:55:29.572856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.411 [2024-07-13 02:55:29.792176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.669 [2024-07-13 02:55:29.953109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.236 02:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.236 02:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:24.236 02:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61678 00:06:24.236 02:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:24.236 02:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61678 /var/tmp/spdk2.sock 00:06:24.236 02:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61678 ']' 00:06:24.236 02:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.236 02:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.236 02:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.236 02:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.236 02:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.236 [2024-07-13 02:55:30.589585] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:24.236 [2024-07-13 02:55:30.589771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61678 ] 00:06:24.494 [2024-07-13 02:55:30.768222] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
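For non_locking_app_on_locked_coremask the trace launches a second target on the same core mask but with the locks switched off, which is presumably why the second instance logs "CPU core locks deactivated." instead of failing to start. The essence, with paths and flags copied from the log and all waiting and cleanup omitted:

# First instance owns core 0 and its lock file.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &

# Second instance reuses the mask but opts out of the lock and talks on its own RPC socket.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

The earlier default_locks_via_rpc test exercises the same switch at runtime through rpc.py framework_disable_cpumask_locks / framework_enable_cpumask_locks rather than the command-line flag.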
00:06:24.494 [2024-07-13 02:55:30.768292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.753 [2024-07-13 02:55:31.098758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.012 [2024-07-13 02:55:31.442213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.960 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.960 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:25.960 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61662 00:06:25.960 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61662 00:06:25.960 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.527 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61662 00:06:26.527 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61662 ']' 00:06:26.527 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61662 00:06:26.527 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:26.527 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.527 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61662 00:06:26.527 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.527 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.527 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61662' 00:06:26.527 killing process with pid 61662 00:06:26.527 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61662 00:06:26.527 02:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61662 00:06:30.713 02:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61678 00:06:30.713 02:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61678 ']' 00:06:30.713 02:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61678 00:06:30.713 02:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:30.713 02:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.713 02:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61678 00:06:30.713 02:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.713 02:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.713 killing process with pid 61678 00:06:30.713 02:55:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61678' 00:06:30.713 02:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61678 00:06:30.713 02:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61678 00:06:32.090 00:06:32.090 real 0m9.244s 00:06:32.090 user 0m9.709s 00:06:32.090 sys 0m1.005s 00:06:32.090 02:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.090 02:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.090 ************************************ 00:06:32.090 END TEST non_locking_app_on_locked_coremask 00:06:32.090 ************************************ 00:06:32.349 02:55:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:32.349 02:55:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:32.349 02:55:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.349 02:55:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.349 02:55:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.349 ************************************ 00:06:32.349 START TEST locking_app_on_unlocked_coremask 00:06:32.349 ************************************ 00:06:32.349 02:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:32.349 02:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61802 00:06:32.349 02:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61802 /var/tmp/spdk.sock 00:06:32.349 02:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:32.349 02:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61802 ']' 00:06:32.349 02:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.349 02:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.349 02:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.349 02:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.349 02:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.349 [2024-07-13 02:55:38.733843] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:32.349 [2024-07-13 02:55:38.734065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61802 ] 00:06:32.607 [2024-07-13 02:55:38.900738] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:32.607 [2024-07-13 02:55:38.900809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.607 [2024-07-13 02:55:39.059623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.866 [2024-07-13 02:55:39.213765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.433 02:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.433 02:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:33.433 02:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.433 02:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61826 00:06:33.433 02:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61826 /var/tmp/spdk2.sock 00:06:33.433 02:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61826 ']' 00:06:33.433 02:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.433 02:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.433 02:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.433 02:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.433 02:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.433 [2024-07-13 02:55:39.800679] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:33.433 [2024-07-13 02:55:39.800832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61826 ] 00:06:33.691 [2024-07-13 02:55:39.967860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.950 [2024-07-13 02:55:40.291694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.208 [2024-07-13 02:55:40.621548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.142 02:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.142 02:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:35.142 02:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61826 00:06:35.142 02:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.142 02:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61826 00:06:36.076 02:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61802 00:06:36.076 02:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61802 ']' 00:06:36.076 02:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61802 00:06:36.076 02:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:36.076 02:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.076 02:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61802 00:06:36.076 02:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.076 02:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.076 killing process with pid 61802 00:06:36.076 02:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61802' 00:06:36.076 02:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61802 00:06:36.076 02:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 61802 00:06:40.261 02:55:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61826 00:06:40.261 02:55:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61826 ']' 00:06:40.261 02:55:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61826 00:06:40.261 02:55:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:40.261 02:55:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.261 02:55:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61826 00:06:40.261 02:55:46 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.261 02:55:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.261 killing process with pid 61826 00:06:40.261 02:55:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61826' 00:06:40.261 02:55:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61826 00:06:40.261 02:55:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 61826 00:06:41.637 00:06:41.637 real 0m9.291s 00:06:41.637 user 0m9.787s 00:06:41.637 sys 0m1.164s 00:06:41.638 02:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.638 02:55:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.638 ************************************ 00:06:41.638 END TEST locking_app_on_unlocked_coremask 00:06:41.638 ************************************ 00:06:41.638 02:55:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:41.638 02:55:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:41.638 02:55:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.638 02:55:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.638 02:55:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.638 ************************************ 00:06:41.638 START TEST locking_app_on_locked_coremask 00:06:41.638 ************************************ 00:06:41.638 02:55:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:41.638 02:55:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61949 00:06:41.638 02:55:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61949 /var/tmp/spdk.sock 00:06:41.638 02:55:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61949 ']' 00:06:41.638 02:55:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.638 02:55:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.638 02:55:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.638 02:55:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.638 02:55:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.638 02:55:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.638 [2024-07-13 02:55:48.081199] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
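The locks_exist step that closed the previous test is the primitive all of these cases rely on: ask lslocks which files the target holds locked and look for the spdk_cpu_lock naming pattern. A hedged reconstruction follows; the function wrapper and messages are assumptions, while the lslocks | grep pipeline is the one from the trace.

    # Does the given pid hold at least one /var/tmp/spdk_cpu_lock_* file?
    locks_exist() {
        local pid=$1
        if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
            echo "pid $pid holds an SPDK core lock"
        else
            echo "pid $pid holds no SPDK core locks" >&2
            return 1
        fi
    }

    locks_exist "$tgt2_pid"    # in the trace above this was pid 61826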
00:06:41.638 [2024-07-13 02:55:48.081417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61949 ] 00:06:41.901 [2024-07-13 02:55:48.251554] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.163 [2024-07-13 02:55:48.404493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.163 [2024-07-13 02:55:48.558919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61968 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61968 /var/tmp/spdk2.sock 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61968 /var/tmp/spdk2.sock 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61968 /var/tmp/spdk2.sock 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61968 ']' 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.730 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.730 [2024-07-13 02:55:49.151834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
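This locked-coremask case launches a second target on the same mask while pid 61949 already holds the core 0 lock, so the harness wraps the wait in NOT: the step passes only if the child fails, which it does below with "Cannot create lock on core 0". A simplified stand-in for that inversion helper (not the verbatim autotest implementation):

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1          # the command unexpectedly succeeded
        fi
        return 0              # expected failure
    }

    NOT false && echo "NOT turns an expected failure into a pass"
    # In the trace: NOT waitforlisten 61968 /var/tmp/spdk2.sock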
00:06:42.730 [2024-07-13 02:55:49.152056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61968 ] 00:06:42.989 [2024-07-13 02:55:49.327789] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61949 has claimed it. 00:06:42.989 [2024-07-13 02:55:49.327876] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.557 ERROR: process (pid: 61968) is no longer running 00:06:43.557 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61968) - No such process 00:06:43.557 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.557 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:43.557 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:43.557 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.557 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:43.557 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.557 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61949 00:06:43.557 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61949 00:06:43.557 02:55:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.816 02:55:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61949 00:06:43.816 02:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61949 ']' 00:06:43.816 02:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61949 00:06:43.816 02:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:43.816 02:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.816 02:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61949 00:06:43.816 02:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.816 02:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.816 killing process with pid 61949 00:06:43.816 02:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61949' 00:06:43.816 02:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61949 00:06:43.816 02:55:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61949 00:06:45.720 00:06:45.720 real 0m4.100s 00:06:45.720 user 0m4.490s 00:06:45.720 sys 0m0.744s 00:06:45.720 02:55:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.720 02:55:52 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:45.720 ************************************ 00:06:45.720 END TEST locking_app_on_locked_coremask 00:06:45.720 ************************************ 00:06:45.721 02:55:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:45.721 02:55:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:45.721 02:55:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.721 02:55:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.721 02:55:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.721 ************************************ 00:06:45.721 START TEST locking_overlapped_coremask 00:06:45.721 ************************************ 00:06:45.721 02:55:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:45.721 02:55:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62028 00:06:45.721 02:55:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:45.721 02:55:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62028 /var/tmp/spdk.sock 00:06:45.721 02:55:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 62028 ']' 00:06:45.721 02:55:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.721 02:55:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.721 02:55:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.721 02:55:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.721 02:55:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.979 [2024-07-13 02:55:52.227593] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:45.979 [2024-07-13 02:55:52.227751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62028 ] 00:06:45.979 [2024-07-13 02:55:52.395589] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.237 [2024-07-13 02:55:52.555531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.237 [2024-07-13 02:55:52.555656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.237 [2024-07-13 02:55:52.555668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.237 [2024-07-13 02:55:52.715779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62050 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62050 /var/tmp/spdk2.sock 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62050 /var/tmp/spdk2.sock 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:46.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62050 /var/tmp/spdk2.sock 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 62050 ']' 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.803 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.060 [2024-07-13 02:55:53.318661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
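The two core masks in this test collide on exactly one core: 0x7 covers cores 0-2 for the first target and 0x1c covers cores 2-4 for the second, so only core 2 is contested, and that is the core named in the lock error below. The overlap is plain bit arithmetic:

    # 0x07 = 0b00111 -> cores 0,1,2   (first target)
    # 0x1c = 0b11100 -> cores 2,3,4   (second target)
    printf 'contested cores mask: 0x%x\n' $(( 0x07 & 0x1c ))   # prints 0x4, i.e. core 2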
00:06:47.060 [2024-07-13 02:55:53.318834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62050 ] 00:06:47.060 [2024-07-13 02:55:53.495425] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62028 has claimed it. 00:06:47.060 [2024-07-13 02:55:53.495531] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.626 ERROR: process (pid: 62050) is no longer running 00:06:47.626 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62050) - No such process 00:06:47.626 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.626 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:47.626 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62028 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 62028 ']' 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 62028 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62028 00:06:47.627 killing process with pid 62028 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62028' 00:06:47.627 02:55:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 62028 00:06:47.627 02:55:53 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 62028 00:06:49.552 ************************************ 00:06:49.552 END TEST locking_overlapped_coremask 00:06:49.552 ************************************ 00:06:49.552 00:06:49.552 real 0m3.719s 00:06:49.552 user 0m9.786s 00:06:49.552 sys 0m0.504s 00:06:49.552 02:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.552 02:55:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.552 02:55:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.552 02:55:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:49.552 02:55:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.552 02:55:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.552 02:55:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.552 ************************************ 00:06:49.552 START TEST locking_overlapped_coremask_via_rpc 00:06:49.552 ************************************ 00:06:49.552 02:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:49.552 02:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62105 00:06:49.552 02:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:49.552 02:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62105 /var/tmp/spdk.sock 00:06:49.552 02:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62105 ']' 00:06:49.552 02:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.552 02:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.552 02:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.552 02:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.552 02:55:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.552 [2024-07-13 02:55:55.974775] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:49.552 [2024-07-13 02:55:55.974974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62105 ] 00:06:49.810 [2024-07-13 02:55:56.130021] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
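The check_remaining_locks step at the end of the previous test asserts that, after the overlapping second target was rejected, exactly the lock files for cores 0-2 are left under /var/tmp. A sketch of that comparison is below (the function wrapper is an assumption; the two globs are the ones from the trace). The via_rpc variant starting here differs only in that both targets boot with --disable-cpumask-locks and take their locks later over JSON-RPC.

    # Exactly /var/tmp/spdk_cpu_lock_000..002 should exist, nothing more.
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }

    check_remaining_locks && echo "only cores 0-2 are locked, as expected"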
00:06:49.810 [2024-07-13 02:55:56.130124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.810 [2024-07-13 02:55:56.290316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.810 [2024-07-13 02:55:56.290451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.810 [2024-07-13 02:55:56.290456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.069 [2024-07-13 02:55:56.462123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.635 02:55:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.635 02:55:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:50.635 02:55:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:50.635 02:55:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62123 00:06:50.635 02:55:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62123 /var/tmp/spdk2.sock 00:06:50.635 02:55:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62123 ']' 00:06:50.635 02:55:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.635 02:55:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.635 02:55:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.635 02:55:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.635 02:55:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.635 [2024-07-13 02:55:57.027484] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:50.635 [2024-07-13 02:55:57.027654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62123 ] 00:06:50.893 [2024-07-13 02:55:57.198615] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.893 [2024-07-13 02:55:57.198692] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.151 [2024-07-13 02:55:57.547784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.151 [2024-07-13 02:55:57.547903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.151 [2024-07-13 02:55:57.547946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.409 [2024-07-13 02:55:57.893167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.783 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.783 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:52.783 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.783 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.783 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.783 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.783 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.783 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:52.783 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.783 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:52.783 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.783 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.784 [2024-07-13 02:55:58.883134] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62105 has claimed it. 
00:06:52.784 request: 00:06:52.784 { 00:06:52.784 "method": "framework_enable_cpumask_locks", 00:06:52.784 "req_id": 1 00:06:52.784 } 00:06:52.784 Got JSON-RPC error response 00:06:52.784 response: 00:06:52.784 { 00:06:52.784 "code": -32603, 00:06:52.784 "message": "Failed to claim CPU core: 2" 00:06:52.784 } 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62105 /var/tmp/spdk.sock 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62105 ']' 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.784 02:55:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.784 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.784 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:52.784 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62123 /var/tmp/spdk2.sock 00:06:52.784 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62123 ']' 00:06:52.784 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.784 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.784 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
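The JSON-RPC exchange just logged is the whole point of the via_rpc variant: enabling the locks on the first target claims cores 0-2, so the same method against the second target's socket fails with -32603. Replayed by hand it would look roughly like this (driving it through rpc.py is an assumption about how one would issue the call outside the harness; the method name and error text are taken from the trace):

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # succeeds, locks cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails:
    #   request : {"method": "framework_enable_cpumask_locks", "req_id": 1}
    #   response: {"code": -32603, "message": "Failed to claim CPU core: 2"}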
00:06:52.784 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.784 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.042 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.042 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:53.042 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.042 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.042 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.042 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.042 00:06:53.042 real 0m3.566s 00:06:53.042 user 0m1.271s 00:06:53.042 sys 0m0.161s 00:06:53.042 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.042 02:55:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.042 ************************************ 00:06:53.042 END TEST locking_overlapped_coremask_via_rpc 00:06:53.042 ************************************ 00:06:53.043 02:55:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:53.043 02:55:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.043 02:55:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62105 ]] 00:06:53.043 02:55:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62105 00:06:53.043 02:55:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62105 ']' 00:06:53.043 02:55:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62105 00:06:53.043 02:55:59 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:53.043 02:55:59 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.043 02:55:59 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62105 00:06:53.043 02:55:59 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.043 02:55:59 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.043 killing process with pid 62105 00:06:53.043 02:55:59 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62105' 00:06:53.043 02:55:59 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 62105 00:06:53.043 02:55:59 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 62105 00:06:55.572 02:56:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62123 ]] 00:06:55.572 02:56:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62123 00:06:55.572 02:56:01 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62123 ']' 00:06:55.572 02:56:01 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62123 00:06:55.572 02:56:01 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:55.572 02:56:01 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.572 02:56:01 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62123 00:06:55.572 02:56:01 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:55.572 02:56:01 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:55.572 killing process with pid 62123 00:06:55.572 02:56:01 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62123' 00:06:55.572 02:56:01 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 62123 00:06:55.572 02:56:01 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 62123 00:06:56.944 02:56:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:56.944 02:56:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:56.944 02:56:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62105 ]] 00:06:56.944 02:56:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62105 00:06:56.944 02:56:03 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62105 ']' 00:06:56.944 02:56:03 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62105 00:06:56.944 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (62105) - No such process 00:06:56.944 Process with pid 62105 is not found 00:06:56.944 02:56:03 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 62105 is not found' 00:06:56.944 02:56:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62123 ]] 00:06:56.944 02:56:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62123 00:06:56.944 02:56:03 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62123 ']' 00:06:56.944 02:56:03 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62123 00:06:56.944 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (62123) - No such process 00:06:56.944 Process with pid 62123 is not found 00:06:56.944 02:56:03 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 62123 is not found' 00:06:56.944 02:56:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:56.944 ************************************ 00:06:56.944 END TEST cpu_locks 00:06:56.944 ************************************ 00:06:56.944 00:06:56.944 real 0m40.714s 00:06:56.944 user 1m9.447s 00:06:56.944 sys 0m5.535s 00:06:56.944 02:56:03 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.944 02:56:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.944 02:56:03 event -- common/autotest_common.sh@1142 -- # return 0 00:06:56.944 00:06:56.944 real 1m11.694s 00:06:56.944 user 2m9.921s 00:06:56.944 sys 0m9.132s 00:06:56.944 02:56:03 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.944 02:56:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.944 ************************************ 00:06:56.944 END TEST event 00:06:56.944 ************************************ 00:06:56.944 02:56:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:56.944 02:56:03 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:56.944 02:56:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.944 02:56:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.944 02:56:03 -- common/autotest_common.sh@10 -- # set +x 00:06:57.203 ************************************ 00:06:57.203 START TEST thread 
00:06:57.203 ************************************ 00:06:57.203 02:56:03 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.203 * Looking for test storage... 00:06:57.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:57.203 02:56:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.203 02:56:03 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:57.203 02:56:03 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.203 02:56:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.203 ************************************ 00:06:57.203 START TEST thread_poller_perf 00:06:57.203 ************************************ 00:06:57.203 02:56:03 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.203 [2024-07-13 02:56:03.573513] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:57.203 [2024-07-13 02:56:03.573704] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62292 ] 00:06:57.460 [2024-07-13 02:56:03.744141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.719 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:57.719 [2024-07-13 02:56:03.970594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.113 ====================================== 00:06:59.113 busy:2212289508 (cyc) 00:06:59.113 total_run_count: 341000 00:06:59.113 tsc_hz: 2200000000 (cyc) 00:06:59.113 ====================================== 00:06:59.113 poller_cost: 6487 (cyc), 2948 (nsec) 00:06:59.113 00:06:59.113 real 0m1.797s 00:06:59.113 user 0m1.597s 00:06:59.113 sys 0m0.090s 00:06:59.113 02:56:05 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.113 02:56:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.113 ************************************ 00:06:59.113 END TEST thread_poller_perf 00:06:59.113 ************************************ 00:06:59.113 02:56:05 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:59.113 02:56:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.113 02:56:05 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:59.113 02:56:05 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.113 02:56:05 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.113 ************************************ 00:06:59.113 START TEST thread_poller_perf 00:06:59.113 ************************************ 00:06:59.113 02:56:05 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.113 [2024-07-13 02:56:05.413531] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
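The ====== summary above is a simple ratio: poller_cost is the busy cycle count divided by the number of poller invocations, converted to nanoseconds with the reported TSC frequency. The formula is inferred from the printed numbers rather than quoted from poller_perf's source, and it applies equally to the zero-period run that follows (483 cyc, 219 nsec):

    # Reproduce the poller_cost line from the figures in the trace.
    awk 'BEGIN {
        busy = 2212289508; runs = 341000; tsc_hz = 2200000000
        cyc  = int(busy / runs)              # 6487 cycles per poller invocation
        nsec = int(cyc / (tsc_hz / 1e9))     # 2948 ns at 2.2 GHz
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
    }'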
00:06:59.113 [2024-07-13 02:56:05.413684] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62334 ] 00:06:59.113 [2024-07-13 02:56:05.566710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.371 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:59.371 [2024-07-13 02:56:05.721563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.745 ====================================== 00:07:00.745 busy:2203871150 (cyc) 00:07:00.745 total_run_count: 4560000 00:07:00.745 tsc_hz: 2200000000 (cyc) 00:07:00.745 ====================================== 00:07:00.745 poller_cost: 483 (cyc), 219 (nsec) 00:07:00.745 00:07:00.745 real 0m1.698s 00:07:00.745 user 0m1.512s 00:07:00.745 sys 0m0.078s 00:07:00.745 02:56:07 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.745 02:56:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.745 ************************************ 00:07:00.745 END TEST thread_poller_perf 00:07:00.745 ************************************ 00:07:00.745 02:56:07 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:00.745 02:56:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:00.745 00:07:00.745 real 0m3.684s 00:07:00.745 user 0m3.165s 00:07:00.745 sys 0m0.294s 00:07:00.745 02:56:07 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.745 02:56:07 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.745 ************************************ 00:07:00.745 END TEST thread 00:07:00.745 ************************************ 00:07:00.745 02:56:07 -- common/autotest_common.sh@1142 -- # return 0 00:07:00.745 02:56:07 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:00.745 02:56:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.745 02:56:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.745 02:56:07 -- common/autotest_common.sh@10 -- # set +x 00:07:00.745 ************************************ 00:07:00.745 START TEST accel 00:07:00.745 ************************************ 00:07:00.745 02:56:07 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:01.004 * Looking for test storage... 
00:07:01.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:01.004 02:56:07 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:01.004 02:56:07 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:01.004 02:56:07 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:01.004 02:56:07 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=62414 00:07:01.004 02:56:07 accel -- accel/accel.sh@63 -- # waitforlisten 62414 00:07:01.004 02:56:07 accel -- common/autotest_common.sh@829 -- # '[' -z 62414 ']' 00:07:01.004 02:56:07 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:01.004 02:56:07 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:01.004 02:56:07 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.004 02:56:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.004 02:56:07 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.004 02:56:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.004 02:56:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.004 02:56:07 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.004 02:56:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.004 02:56:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.004 02:56:07 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.004 02:56:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:01.004 02:56:07 accel -- accel/accel.sh@41 -- # jq -r . 00:07:01.004 02:56:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.004 [2024-07-13 02:56:07.359342] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:01.004 [2024-07-13 02:56:07.359526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62414 ] 00:07:01.262 [2024-07-13 02:56:07.519512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.262 [2024-07-13 02:56:07.678507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.520 [2024-07-13 02:56:07.835289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@862 -- # return 0 00:07:02.086 02:56:08 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:02.086 02:56:08 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:02.086 02:56:08 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:02.086 02:56:08 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:02.086 02:56:08 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:02.086 02:56:08 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.086 02:56:08 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.086 02:56:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.086 02:56:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.086 02:56:08 accel -- accel/accel.sh@75 -- # killprocess 62414 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@948 -- # '[' -z 62414 ']' 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@952 -- # kill -0 62414 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@953 -- # uname 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62414 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.086 killing process with pid 62414 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62414' 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@967 -- # kill 62414 00:07:02.086 02:56:08 accel -- common/autotest_common.sh@972 -- # wait 62414 00:07:03.986 02:56:10 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:03.986 02:56:10 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:03.986 02:56:10 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:03.986 02:56:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.986 02:56:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.986 02:56:10 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:03.986 02:56:10 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:03.986 02:56:10 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:03.986 02:56:10 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.986 02:56:10 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.986 02:56:10 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.986 02:56:10 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.986 02:56:10 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.986 02:56:10 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:03.986 02:56:10 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:03.986 02:56:10 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.986 02:56:10 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:03.986 02:56:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.986 02:56:10 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:03.986 02:56:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:03.986 02:56:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.986 02:56:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.986 ************************************ 00:07:03.986 START TEST accel_missing_filename 00:07:03.986 ************************************ 00:07:03.986 02:56:10 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:03.986 02:56:10 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:03.986 02:56:10 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:03.986 02:56:10 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:03.986 02:56:10 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.986 02:56:10 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:03.986 02:56:10 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.986 02:56:10 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:03.986 02:56:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:03.986 02:56:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:03.986 02:56:10 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.986 02:56:10 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.986 02:56:10 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.986 02:56:10 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.986 02:56:10 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.986 02:56:10 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:03.986 02:56:10 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:03.986 [2024-07-13 02:56:10.361546] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:03.986 [2024-07-13 02:56:10.361750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62483 ] 00:07:04.244 [2024-07-13 02:56:10.527877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.244 [2024-07-13 02:56:10.679560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.502 [2024-07-13 02:56:10.828608] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.760 [2024-07-13 02:56:11.226196] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:05.329 A filename is required. 
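The accel_missing_filename test above hinges on an expected failure: accel_perf -t 1 -w compress with no -l input file has to exit non-zero ("A filename is required."), and the harness's NOT wrapper converts that failure into a pass (see the es=234 -> es=1 handling that follows). A simplified stand-in for that wrapper, for illustration only and not the actual autotest_common.sh implementation:

# Illustrative expected-failure wrapper in the spirit of the harness's NOT helper.
NOT() {
    if "$@"; then
        return 1   # the command unexpectedly succeeded -> the test should fail
    fi
    return 0       # non-zero exit (here: "A filename is required.") -> test passes
}

# compress with no -l input file is expected to be rejected by accel_perf
NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress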
00:07:05.329 02:56:11 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:05.329 02:56:11 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.329 02:56:11 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:05.329 02:56:11 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:05.329 02:56:11 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:05.329 02:56:11 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.329 00:07:05.329 real 0m1.251s 00:07:05.329 user 0m1.056s 00:07:05.329 sys 0m0.138s 00:07:05.329 02:56:11 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.329 02:56:11 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:05.329 ************************************ 00:07:05.329 END TEST accel_missing_filename 00:07:05.329 ************************************ 00:07:05.329 02:56:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.329 02:56:11 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:05.329 02:56:11 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:05.329 02:56:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.329 02:56:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.329 ************************************ 00:07:05.329 START TEST accel_compress_verify 00:07:05.329 ************************************ 00:07:05.329 02:56:11 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:05.329 02:56:11 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:05.329 02:56:11 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:05.329 02:56:11 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:05.329 02:56:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.329 02:56:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:05.329 02:56:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.329 02:56:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:05.329 02:56:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:05.329 02:56:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:05.329 02:56:11 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.329 02:56:11 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.329 02:56:11 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.329 02:56:11 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.329 02:56:11 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.329 02:56:11 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:07:05.329 02:56:11 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:05.329 [2024-07-13 02:56:11.655193] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:05.329 [2024-07-13 02:56:11.655363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62516 ] 00:07:05.329 [2024-07-13 02:56:11.806133] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.588 [2024-07-13 02:56:11.957338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.847 [2024-07-13 02:56:12.104585] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.105 [2024-07-13 02:56:12.507917] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:06.362 00:07:06.362 Compression does not support the verify option, aborting. 00:07:06.362 02:56:12 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:06.362 02:56:12 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.362 02:56:12 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:06.362 02:56:12 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:06.362 02:56:12 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:06.362 02:56:12 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.362 00:07:06.362 real 0m1.222s 00:07:06.362 user 0m1.047s 00:07:06.362 sys 0m0.113s 00:07:06.362 02:56:12 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.362 ************************************ 00:07:06.362 END TEST accel_compress_verify 00:07:06.362 02:56:12 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:06.362 ************************************ 00:07:06.620 02:56:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.620 02:56:12 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:06.620 02:56:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:06.620 02:56:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.620 02:56:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.620 ************************************ 00:07:06.620 START TEST accel_wrong_workload 00:07:06.620 ************************************ 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:07:06.620 02:56:12 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:06.620 02:56:12 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:06.620 02:56:12 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.620 02:56:12 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.620 02:56:12 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.620 02:56:12 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.620 02:56:12 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.620 02:56:12 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:06.620 02:56:12 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:06.620 Unsupported workload type: foobar 00:07:06.620 [2024-07-13 02:56:12.935637] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:06.620 accel_perf options: 00:07:06.620 [-h help message] 00:07:06.620 [-q queue depth per core] 00:07:06.620 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:06.620 [-T number of threads per core 00:07:06.620 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:06.620 [-t time in seconds] 00:07:06.620 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:06.620 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:06.620 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:06.620 [-l for compress/decompress workloads, name of uncompressed input file 00:07:06.620 [-S for crc32c workload, use this seed value (default 0) 00:07:06.620 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:06.620 [-f for fill workload, use this BYTE value (default 255) 00:07:06.620 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:06.620 [-y verify result if this switch is on] 00:07:06.620 [-a tasks to allocate per core (default: same value as -q)] 00:07:06.620 Can be used to spread operations across a wider range of memory. 
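For contrast with the rejected -w foobar run, the option list printed above can be assembled into a valid invocation. The crc32c flags below mirror the accel_crc32c test later in this log (-S 32 -y); the -q and -o values are arbitrary illustrations, not taken from the log:

# Valid run built from the usage text above: 1-second software crc32c with a seed
# of 32 and result verification; queue depth 64 and 4 KiB transfers are examples.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w crc32c -S 32 -y -q 64 -o 4096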
00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.620 00:07:06.620 real 0m0.076s 00:07:06.620 user 0m0.090s 00:07:06.620 sys 0m0.033s 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.620 ************************************ 00:07:06.620 END TEST accel_wrong_workload 00:07:06.620 ************************************ 00:07:06.620 02:56:12 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:06.620 02:56:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.620 02:56:13 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:06.620 02:56:13 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:06.620 02:56:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.620 02:56:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.620 ************************************ 00:07:06.620 START TEST accel_negative_buffers 00:07:06.620 ************************************ 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:06.620 02:56:13 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:06.620 02:56:13 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:06.620 02:56:13 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.620 02:56:13 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.620 02:56:13 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.620 02:56:13 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.620 02:56:13 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.620 02:56:13 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:06.620 02:56:13 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:06.620 -x option must be non-negative. 
00:07:06.620 [2024-07-13 02:56:13.059839] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:06.620 accel_perf options: 00:07:06.620 [-h help message] 00:07:06.620 [-q queue depth per core] 00:07:06.620 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:06.620 [-T number of threads per core 00:07:06.620 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:06.620 [-t time in seconds] 00:07:06.620 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:06.620 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:06.620 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:06.620 [-l for compress/decompress workloads, name of uncompressed input file 00:07:06.620 [-S for crc32c workload, use this seed value (default 0) 00:07:06.620 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:06.620 [-f for fill workload, use this BYTE value (default 255) 00:07:06.620 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:06.620 [-y verify result if this switch is on] 00:07:06.620 [-a tasks to allocate per core (default: same value as -q)] 00:07:06.620 Can be used to spread operations across a wider range of memory. 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.620 00:07:06.620 real 0m0.072s 00:07:06.620 user 0m0.090s 00:07:06.620 sys 0m0.032s 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.620 02:56:13 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:06.620 ************************************ 00:07:06.620 END TEST accel_negative_buffers 00:07:06.620 ************************************ 00:07:06.878 02:56:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.878 02:56:13 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:06.878 02:56:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:06.878 02:56:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.878 02:56:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.878 ************************************ 00:07:06.878 START TEST accel_crc32c 00:07:06.878 ************************************ 00:07:06.878 02:56:13 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:06.878 02:56:13 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:06.878 [2024-07-13 02:56:13.180627] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:06.879 [2024-07-13 02:56:13.180789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62583 ] 00:07:06.879 [2024-07-13 02:56:13.340235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.137 [2024-07-13 02:56:13.497806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.396 02:56:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.301 02:56:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.301 02:56:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.301 02:56:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.301 02:56:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:07:09.301 02:56:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.301 02:56:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.301 02:56:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.301 02:56:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:09.302 02:56:15 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.302 00:07:09.302 real 0m2.264s 00:07:09.302 user 0m2.019s 00:07:09.302 sys 0m0.152s 00:07:09.302 02:56:15 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.302 02:56:15 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:09.302 ************************************ 00:07:09.302 END TEST accel_crc32c 00:07:09.302 ************************************ 00:07:09.302 02:56:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.302 02:56:15 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:09.302 02:56:15 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:09.302 02:56:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.302 02:56:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.302 ************************************ 00:07:09.302 START TEST accel_crc32c_C2 00:07:09.302 ************************************ 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:09.302 02:56:15 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:09.302 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:09.302 [2024-07-13 02:56:15.484131] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:09.302 [2024-07-13 02:56:15.484311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62635 ] 00:07:09.302 [2024-07-13 02:56:15.636004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.302 [2024-07-13 02:56:15.784346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:09.562 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.563 02:56:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.468 02:56:17 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.468 00:07:11.468 real 0m2.266s 00:07:11.468 user 0m2.052s 00:07:11.468 sys 0m0.119s 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.468 02:56:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:11.468 ************************************ 00:07:11.468 END TEST accel_crc32c_C2 00:07:11.468 ************************************ 00:07:11.468 02:56:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.468 02:56:17 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:11.468 02:56:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:11.468 02:56:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.468 02:56:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.468 ************************************ 00:07:11.468 START TEST accel_copy 00:07:11.468 ************************************ 00:07:11.468 02:56:17 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.468 02:56:17 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:11.468 02:56:17 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:11.468 [2024-07-13 02:56:17.814018] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:11.468 [2024-07-13 02:56:17.814210] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62676 ] 00:07:11.727 [2024-07-13 02:56:17.984966] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.727 [2024-07-13 02:56:18.149803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.985 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 
02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.986 02:56:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:13.888 02:56:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.888 00:07:13.888 real 0m2.341s 00:07:13.888 user 0m2.087s 00:07:13.888 sys 0m0.158s 00:07:13.888 02:56:20 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.888 02:56:20 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:13.888 ************************************ 00:07:13.888 END TEST accel_copy 00:07:13.888 ************************************ 00:07:13.888 02:56:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.888 02:56:20 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:13.888 02:56:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:13.888 02:56:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.888 02:56:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.888 ************************************ 00:07:13.888 START TEST accel_fill 00:07:13.888 ************************************ 00:07:13.888 02:56:20 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.888 02:56:20 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:13.888 02:56:20 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:13.888 [2024-07-13 02:56:20.201519] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:13.888 [2024-07-13 02:56:20.201680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62717 ] 00:07:13.888 [2024-07-13 02:56:20.369411] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.147 [2024-07-13 02:56:20.521528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.405 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.406 02:56:20 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.406 02:56:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:16.332 02:56:22 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.332 00:07:16.332 real 0m2.327s 00:07:16.332 user 0m2.087s 00:07:16.332 sys 0m0.145s 00:07:16.332 ************************************ 00:07:16.332 END TEST accel_fill 00:07:16.332 ************************************ 00:07:16.332 02:56:22 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.332 02:56:22 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:16.332 02:56:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.332 02:56:22 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:16.332 02:56:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:16.332 02:56:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.332 02:56:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.332 ************************************ 00:07:16.332 START TEST accel_copy_crc32c 00:07:16.332 ************************************ 00:07:16.332 02:56:22 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:07:16.333 02:56:22 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:16.333 [2024-07-13 02:56:22.577946] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:16.333 [2024-07-13 02:56:22.578099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62758 ] 00:07:16.333 [2024-07-13 02:56:22.749254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.598 [2024-07-13 02:56:22.924784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.856 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.857 02:56:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
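Stripped of the harness, the copy_crc32c case being set up above is a single accel_perf run; the binary path and flags are visible in the `accel/accel.sh@12` trace. A minimal hand-run sketch, on the assumption that the `-c /dev/fd/62` module config can be dropped because `accel_json_cfg` is empty for this run:

```bash
# Hedged standalone reproduction of the copy_crc32c case (flags taken from the trace;
# omitting -c is an assumption that only holds with no accel module JSON configured).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
```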
00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.759 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.760 00:07:18.760 real 0m2.386s 00:07:18.760 user 0m2.138s 00:07:18.760 sys 0m0.151s 00:07:18.760 ************************************ 00:07:18.760 END TEST accel_copy_crc32c 00:07:18.760 ************************************ 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.760 02:56:24 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:18.760 02:56:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.760 02:56:24 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:18.760 02:56:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:18.760 02:56:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.760 02:56:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.760 ************************************ 00:07:18.760 START TEST accel_copy_crc32c_C2 00:07:18.760 ************************************ 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:18.760 02:56:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:18.760 [2024-07-13 02:56:25.020266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:18.760 [2024-07-13 02:56:25.020436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62810 ] 00:07:18.760 [2024-07-13 02:56:25.189496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.019 [2024-07-13 02:56:25.362320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.278 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.279 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.279 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.279 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.279 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.279 02:56:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.184 00:07:21.184 real 0m2.329s 00:07:21.184 user 0m2.095s 00:07:21.184 sys 0m0.141s 00:07:21.184 ************************************ 00:07:21.184 END TEST accel_copy_crc32c_C2 
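The accel_copy_crc32c_C2 case that just finished re-runs the same opcode with `-C 2` (visible in the `accel/accel.sh@12` trace above), i.e. the CRC is computed over a chain of two source buffers, which is why this run read an '8192 bytes' value alongside the usual '4096 bytes' one. A hedged reproduction, again assuming the empty module config can be omitted:

```bash
# copy_crc32c over a 2-buffer chain, as driven by run_test at accel/accel.sh@106 above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2
```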
00:07:21.184 ************************************ 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.184 02:56:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:21.184 02:56:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.184 02:56:27 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:21.184 02:56:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:21.184 02:56:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.184 02:56:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.184 ************************************ 00:07:21.184 START TEST accel_dualcast 00:07:21.184 ************************************ 00:07:21.184 02:56:27 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:21.184 02:56:27 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:21.184 [2024-07-13 02:56:27.405786] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:21.184 [2024-07-13 02:56:27.406030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62851 ] 00:07:21.184 [2024-07-13 02:56:27.576546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.443 [2024-07-13 02:56:27.742609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.443 02:56:27 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.443 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:21.444 02:56:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:23.348 ************************************ 00:07:23.348 END TEST accel_dualcast 00:07:23.348 ************************************ 00:07:23.348 02:56:29 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.348 00:07:23.348 real 0m2.254s 00:07:23.348 user 0m2.029s 00:07:23.348 sys 0m0.131s 00:07:23.348 02:56:29 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.348 02:56:29 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:23.348 02:56:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.348 02:56:29 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:23.348 02:56:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:23.348 02:56:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.348 02:56:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.348 ************************************ 00:07:23.348 START TEST accel_compare 00:07:23.348 ************************************ 00:07:23.348 02:56:29 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:23.348 02:56:29 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:23.348 [2024-07-13 02:56:29.699860] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
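Each case in this block is launched through the same wrapper, `run_test accel_<name> accel_test -t 1 -w <opcode> -y` (the accel/accel.sh@105-@108 calls above, with the xor cases still to come), which prints the starred START/END TEST banners and the real/user/sys timing that follows every run. A rough sketch of that pattern, inferred from the banners rather than copied from common/autotest_common.sh:

```bash
# Hedged sketch of the run_test wrapper behaviour visible in the banners and timings above.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                      # e.g. accel_test -t 1 -w compare -y
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}
```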
00:07:23.348 [2024-07-13 02:56:29.700019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62892 ] 00:07:23.607 [2024-07-13 02:56:29.856144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.607 [2024-07-13 02:56:30.007522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.865 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.865 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.865 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.865 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.865 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.865 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.865 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.865 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.865 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:23.865 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.865 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:23.866 02:56:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 
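The three `accel/accel.sh@27` checks printed after each run (`[[ -n software ]]`, `[[ -n <opcode> ]]`, `[[ software == \s\o\f\t\w\a\r\e ]]`) are the actual pass/fail criteria here: the module and opcode parsed from accel_perf's output must be non-empty, and the run must have landed on the software module. With the variables substituted back in, the assertion is simply:

```bash
# What the accel.sh@27 checks amount to (illustrative; variable names follow the trace).
[[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]
```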
00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:25.768 02:56:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.768 00:07:25.768 real 0m2.227s 00:07:25.768 user 0m2.010s 00:07:25.768 sys 0m0.125s 00:07:25.768 02:56:31 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.768 ************************************ 00:07:25.768 END TEST accel_compare 00:07:25.768 ************************************ 00:07:25.768 02:56:31 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:25.768 02:56:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.768 02:56:31 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:25.768 02:56:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:25.768 02:56:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.768 02:56:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.768 ************************************ 00:07:25.768 START TEST accel_xor 00:07:25.768 ************************************ 00:07:25.768 02:56:31 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:25.768 02:56:31 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:25.768 [2024-07-13 02:56:31.995315] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:25.768 [2024-07-13 02:56:31.995473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62939 ] 00:07:25.768 [2024-07-13 02:56:32.163645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.027 [2024-07-13 02:56:32.342514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.027 02:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.930 02:56:34 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.930 ************************************ 00:07:27.930 END TEST accel_xor 00:07:27.930 ************************************ 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.930 00:07:27.930 real 0m2.259s 00:07:27.930 user 0m2.005s 00:07:27.930 sys 0m0.162s 00:07:27.930 02:56:34 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.930 02:56:34 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:27.930 02:56:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.930 02:56:34 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:27.930 02:56:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:27.930 02:56:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.930 02:56:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.930 ************************************ 00:07:27.930 START TEST accel_xor 00:07:27.930 ************************************ 00:07:27.930 02:56:34 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:27.930 02:56:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:27.930 [2024-07-13 02:56:34.309359] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
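This second accel_xor case (run_test at accel/accel.sh@110 above) adds `-x 3`, so the XOR is computed across three source buffers instead of the two used a moment ago; the parameter replay that follows accordingly reads `val=3` where the previous run read `val=2`. A hedged standalone reproduction, with the same caveat about dropping the empty `-c` config:

```bash
# xor across three source buffers, as traced at accel/accel.sh@12 above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
```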
00:07:27.930 [2024-07-13 02:56:34.309542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62980 ] 00:07:28.189 [2024-07-13 02:56:34.480540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.189 [2024-07-13 02:56:34.629778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
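The xor trace above resolves to a single accel_perf invocation; the full command line appears in the trace itself (accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3). A minimal sketch for repeating it by hand follows; SPDK_REPO is an assumed placeholder for the checkout path, and the -c JSON config is omitted because the harness passes an empty module config in this run.
# Sketch only: repeat the xor workload from this test outside the CI harness.
# SPDK_REPO is an assumption; the CI VM uses /home/vagrant/spdk_repo/spdk.
SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
# Flags copied verbatim from the trace: 1-second run (-t 1), xor workload (-w xor),
# result verification (-y), and -x 3 as used for this multi-source xor case.
"$SPDK_REPO/build/examples/accel_perf" -t 1 -w xor -y -x 3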
00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.449 02:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.355 02:56:36 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.355 02:56:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.356 02:56:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.356 02:56:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:30.356 02:56:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.356 00:07:30.356 real 0m2.257s 00:07:30.356 user 0m2.023s 00:07:30.356 sys 0m0.140s 00:07:30.356 02:56:36 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.356 02:56:36 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:30.356 ************************************ 00:07:30.356 END TEST accel_xor 00:07:30.356 ************************************ 00:07:30.356 02:56:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.356 02:56:36 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:30.356 02:56:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:30.356 02:56:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.356 02:56:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.356 ************************************ 00:07:30.356 START TEST accel_dif_verify 00:07:30.356 ************************************ 00:07:30.356 02:56:36 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:30.356 02:56:36 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:30.356 [2024-07-13 02:56:36.597749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:30.356 [2024-07-13 02:56:36.597911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63026 ] 00:07:30.356 [2024-07-13 02:56:36.749032] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.615 [2024-07-13 02:56:36.906666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.615 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.616 02:56:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.546 02:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.546 02:56:38 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:32.547 02:56:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.547 00:07:32.547 real 0m2.217s 00:07:32.547 user 0m1.990s 00:07:32.547 sys 0m0.135s 00:07:32.547 02:56:38 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.547 ************************************ 00:07:32.547 END TEST accel_dif_verify 00:07:32.547 ************************************ 00:07:32.547 02:56:38 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:32.547 02:56:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.547 02:56:38 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:32.547 02:56:38 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:32.547 02:56:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.547 02:56:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.547 ************************************ 00:07:32.547 START TEST accel_dif_generate 00:07:32.547 ************************************ 00:07:32.547 02:56:38 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.547 02:56:38 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:32.547 02:56:38 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:32.547 [2024-07-13 02:56:38.877299] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:32.547 [2024-07-13 02:56:38.877454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63067 ] 00:07:32.806 [2024-07-13 02:56:39.045766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.806 [2024-07-13 02:56:39.193422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.066 02:56:39 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.066 02:56:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:34.971 02:56:41 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.971 00:07:34.971 real 0m2.228s 
00:07:34.971 user 0m1.997s 00:07:34.971 sys 0m0.138s 00:07:34.971 02:56:41 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.971 02:56:41 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:34.971 ************************************ 00:07:34.971 END TEST accel_dif_generate 00:07:34.971 ************************************ 00:07:34.971 02:56:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.971 02:56:41 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:34.971 02:56:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:34.971 02:56:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.971 02:56:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.971 ************************************ 00:07:34.971 START TEST accel_dif_generate_copy 00:07:34.971 ************************************ 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.971 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:34.972 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:34.972 [2024-07-13 02:56:41.156176] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:34.972 [2024-07-13 02:56:41.156493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63114 ] 00:07:34.972 [2024-07-13 02:56:41.324319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.231 [2024-07-13 02:56:41.478442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:35.231 02:56:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
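The three DIF runs traced in this stretch (dif_verify, dif_generate, dif_generate_copy) invoke the same accel_perf binary and differ only in the -w argument. A hedged sketch of driving all three by hand, assuming the CI VM's repo layout:
# Sketch only: the DIF workloads exercised above, one 1-second software run each.
# SPDK_REPO is an assumption; -c /dev/fd/62 is dropped since the config is empty here.
SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
for wl in dif_verify dif_generate dif_generate_copy; do
  "$SPDK_REPO/build/examples/accel_perf" -t 1 -w "$wl"
done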
00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:37.136 ************************************ 00:07:37.136 END TEST accel_dif_generate_copy 00:07:37.136 ************************************ 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.136 00:07:37.136 real 0m2.240s 00:07:37.136 user 0m2.008s 00:07:37.136 sys 0m0.138s 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.136 02:56:43 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:37.136 02:56:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.136 02:56:43 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:37.136 02:56:43 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.136 02:56:43 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:37.136 02:56:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.136 02:56:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.136 ************************************ 00:07:37.136 START TEST accel_comp 00:07:37.136 ************************************ 00:07:37.136 02:56:43 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:37.136 02:56:43 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:37.136 02:56:43 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:37.136 [2024-07-13 02:56:43.444823] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:37.137 [2024-07-13 02:56:43.445030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63155 ] 00:07:37.137 [2024-07-13 02:56:43.611837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.395 [2024-07-13 02:56:43.760099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.654 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.655 02:56:43 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:37.655 02:56:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:39.559 02:56:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.559 00:07:39.559 real 0m2.242s 00:07:39.559 user 0m2.020s 00:07:39.559 sys 0m0.124s 00:07:39.559 02:56:45 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.559 02:56:45 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:39.559 ************************************ 00:07:39.559 END TEST accel_comp 00:07:39.559 ************************************ 00:07:39.559 02:56:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.559 02:56:45 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:39.559 02:56:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:39.559 02:56:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.559 02:56:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.559 ************************************ 00:07:39.559 START TEST accel_decomp 00:07:39.559 ************************************ 00:07:39.559 02:56:45 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:39.559 02:56:45 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:39.559 02:56:45 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:39.559 02:56:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.559 02:56:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.559 02:56:45 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:39.559 02:56:45 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:39.559 02:56:45 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:39.559 02:56:45 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.559 02:56:45 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.560 02:56:45 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.560 02:56:45 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.560 02:56:45 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.560 02:56:45 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:39.560 02:56:45 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:39.560 [2024-07-13 02:56:45.739938] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:39.560 [2024-07-13 02:56:45.740103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63196 ] 00:07:39.560 [2024-07-13 02:56:45.906457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.818 [2024-07-13 02:56:46.064440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.818 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
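The accel/accel.sh@19 and @21 entries that repeat above and below are bash xtrace of the banner-parsing loop in accel.sh: accel_perf prints its configuration as colon-separated lines, and the script reads them back to record which opcode ran and which module serviced it (checked later by the @27 tests). The script body is not included in this log; a loop consistent with this trace looks roughly like the sketch below, where the banner keys matched in the case arms are assumptions rather than quotes from accel.sh:

  while IFS=: read -r var val; do
    case "$var" in
      *Workload*) accel_opc=$(echo "$val" | xargs) ;;    # e.g. picks up "decompress"
      *Module*)   accel_module=$(echo "$val" | xargs) ;; # e.g. picks up "software"
    esac
  done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 "$@")
  # in the real harness, fd 62 carries the JSON produced by build_accel_config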
00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:39.819 02:56:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.719 02:56:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.719 02:56:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.719 02:56:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.719 02:56:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.719 02:56:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.720 02:56:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.720 00:07:41.720 real 0m2.240s 00:07:41.720 user 0m2.000s 00:07:41.720 sys 0m0.143s 00:07:41.720 02:56:47 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.720 ************************************ 00:07:41.720 END TEST accel_decomp 00:07:41.720 ************************************ 00:07:41.720 02:56:47 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:41.720 02:56:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.720 02:56:47 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:41.720 02:56:47 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:41.720 02:56:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.720 02:56:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.720 ************************************ 00:07:41.720 START TEST accel_decomp_full 00:07:41.720 ************************************ 00:07:41.720 02:56:47 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:41.720 02:56:47 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:41.720 [2024-07-13 02:56:48.033075] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
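The accel/accel.sh@12 entry just above shows the exact command this case runs. The only difference from the plain accel_decomp case is -o 0: with it, the parsed banner reports '111250 bytes' instead of the default '4096 bytes' (compare the val= entries of the two cases). Reproduced by hand it would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0

Here -c /dev/fd/62 is the JSON emitted by build_accel_config inside the harness; outside it, a path to an equivalent accel JSON config should work in its place.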
00:07:41.720 [2024-07-13 02:56:48.033236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63237 ] 00:07:41.720 [2024-07-13 02:56:48.195679] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.978 [2024-07-13 02:56:48.353423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.237 02:56:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.139 02:56:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.139 02:56:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.139 02:56:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.139 02:56:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.139 02:56:50 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.139 02:56:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.139 02:56:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.139 02:56:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.139 02:56:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.139 02:56:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.139 02:56:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.139 02:56:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.139 02:56:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.140 02:56:50 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.140 00:07:44.140 real 0m2.240s 00:07:44.140 user 0m2.025s 00:07:44.140 sys 0m0.125s 00:07:44.140 02:56:50 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.140 ************************************ 00:07:44.140 END TEST accel_decomp_full 00:07:44.140 ************************************ 00:07:44.140 02:56:50 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:44.140 02:56:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.140 02:56:50 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:44.140 02:56:50 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:44.140 02:56:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.140 02:56:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.140 ************************************ 00:07:44.140 START TEST accel_decomp_mcore 00:07:44.140 ************************************ 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:44.140 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:44.140 [2024-07-13 02:56:50.316464] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:44.140 [2024-07-13 02:56:50.316583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63284 ] 00:07:44.140 [2024-07-13 02:56:50.473504] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.140 [2024-07-13 02:56:50.627414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.140 [2024-07-13 02:56:50.627546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.140 [2024-07-13 02:56:50.628543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.140 [2024-07-13 02:56:50.628590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.399 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 02:56:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.303 02:56:52 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.303 ************************************ 00:07:46.303 END TEST accel_decomp_mcore 00:07:46.303 ************************************ 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.303 00:07:46.303 real 0m2.304s 00:07:46.303 user 0m0.020s 00:07:46.303 sys 0m0.002s 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.303 02:56:52 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:46.303 02:56:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.303 02:56:52 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.303 02:56:52 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:46.303 02:56:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.303 02:56:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.303 ************************************ 00:07:46.303 START TEST accel_decomp_full_mcore 00:07:46.303 ************************************ 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.303 02:56:52 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:46.303 02:56:52 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:46.303 [2024-07-13 02:56:52.657470] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:46.303 [2024-07-13 02:56:52.657679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63333 ] 00:07:46.561 [2024-07-13 02:56:52.809730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.561 [2024-07-13 02:56:52.961317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.561 [2024-07-13 02:56:52.961427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.561 [2024-07-13 02:56:52.962784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.561 [2024-07-13 02:56:52.962812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:46.820 02:56:53 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.820 02:56:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.807 02:56:54 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.807 00:07:48.807 real 0m2.315s 00:07:48.807 user 0m0.022s 00:07:48.807 sys 0m0.002s 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.807 02:56:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:48.807 ************************************ 00:07:48.807 END TEST accel_decomp_full_mcore 00:07:48.807 ************************************ 00:07:48.807 02:56:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.807 02:56:54 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:48.807 02:56:54 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:48.807 02:56:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.807 02:56:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.807 ************************************ 00:07:48.807 START TEST accel_decomp_mthread 00:07:48.807 ************************************ 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:48.807 02:56:54 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:48.807 [2024-07-13 02:56:55.033939] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
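The accel/accel.sh@31 through @41 entries that precede each accel_perf launch are the xtrace of build_accel_config: an accel_json_cfg array is filled with optional JSON fragments (all three [[ 0 -gt 0 ]] guards and the [[ -n '' ]] check are false in this run, so the array stays empty), then joined with IFS=, and passed through jq -r . on its way to accel_perf over /dev/fd/62. The guard variables and fragments are not visible in the trace, so the names in this sketch are placeholders:

  build_accel_config() {
    accel_json_cfg=()
    # one guard per optional accel module; each shows up as "[[ 0 -gt 0 ]]" in this run
    if [[ ${OPTIONAL_MODULE_ENABLED:-0} -gt 0 ]]; then
      accel_json_cfg+=('{"method": "enable_optional_module"}')  # hypothetical fragment
    fi
    # the "[[ -n '' ]]" entry corresponds to an extra setting that is empty in this run
    if [[ -n "${EXTRA_ACCEL_CFG:-}" ]]; then
      accel_json_cfg+=("$EXTRA_ACCEL_CFG")
    fi
    local IFS=,
    echo "[${accel_json_cfg[*]}]" | jq -r .   # join fragments and validate/pretty-print
  }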
00:07:48.807 [2024-07-13 02:56:55.034116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63377 ] 00:07:48.807 [2024-07-13 02:56:55.200031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.090 [2024-07-13 02:56:55.369790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.090 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.091 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.091 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.091 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.091 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.091 02:56:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.994 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.995 00:07:50.995 real 0m2.262s 00:07:50.995 user 0m2.011s 00:07:50.995 sys 0m0.159s 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.995 02:56:57 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:50.995 ************************************ 00:07:50.995 END TEST accel_decomp_mthread 00:07:50.995 ************************************ 00:07:50.995 02:56:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.995 02:56:57 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.995 02:56:57 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:50.995 02:56:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.995 02:56:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.995 ************************************ 00:07:50.995 START 
TEST accel_decomp_full_mthread 00:07:50.995 ************************************ 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:50.995 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:50.995 [2024-07-13 02:56:57.350055] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:50.995 [2024-07-13 02:56:57.350297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63424 ] 00:07:51.253 [2024-07-13 02:56:57.516932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.253 [2024-07-13 02:56:57.665219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.512 02:56:57 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.512 02:56:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.418 00:07:53.418 real 0m2.287s 00:07:53.418 user 0m2.047s 00:07:53.418 sys 0m0.148s 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.418 02:56:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:53.418 ************************************ 00:07:53.418 END TEST accel_decomp_full_mthread 00:07:53.418 ************************************ 
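For reference, the multi-threaded software decompress pass that just finished can be reproduced outside the harness. A minimal sketch, assuming the same tree layout as this VM, with flag meanings inferred from the val= lines above and the accel JSON config (which the harness pipes in on /dev/fd/62) omitted on the assumption that the software module is picked up by default:

    # Hypothetical standalone re-run of the accel_perf decompress pass exercised above.
    # -w decompress: workload; -l: compressed input file; -t 1: run for 1 second;
    # -y: verify output; -o 0: io size as passed by the harness; -T 2: two worker threads.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2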
00:07:53.418 02:56:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:53.418 02:56:59 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:53.418 02:56:59 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.418 02:56:59 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:53.418 02:56:59 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.418 02:56:59 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:53.418 02:56:59 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.418 02:56:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.418 02:56:59 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.418 02:56:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.418 02:56:59 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.418 02:56:59 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.418 02:56:59 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:53.418 02:56:59 accel -- accel/accel.sh@41 -- # jq -r . 00:07:53.418 ************************************ 00:07:53.418 START TEST accel_dif_functional_tests 00:07:53.418 ************************************ 00:07:53.418 02:56:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.418 [2024-07-13 02:56:59.741555] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:53.418 [2024-07-13 02:56:59.741743] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63466 ] 00:07:53.676 [2024-07-13 02:56:59.914560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.676 [2024-07-13 02:57:00.078957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.676 [2024-07-13 02:57:00.079073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.676 [2024-07-13 02:57:00.079099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.935 [2024-07-13 02:57:00.242605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.935 00:07:53.935 00:07:53.935 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.935 http://cunit.sourceforge.net/ 00:07:53.935 00:07:53.935 00:07:53.935 Suite: accel_dif 00:07:53.935 Test: verify: DIF generated, GUARD check ...passed 00:07:53.935 Test: verify: DIF generated, APPTAG check ...passed 00:07:53.935 Test: verify: DIF generated, REFTAG check ...passed 00:07:53.935 Test: verify: DIF not generated, GUARD check ...passed 00:07:53.935 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 02:57:00.328558] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:53.935 passed 00:07:53.935 Test: verify: DIF not generated, REFTAG check ...passed 00:07:53.935 Test: verify: APPTAG correct, APPTAG check ...[2024-07-13 02:57:00.328666] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:53.935 [2024-07-13 02:57:00.328720] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:53.935 passed 00:07:53.935 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:53.935 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:07:53.935 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-13 02:57:00.329044] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:53.935 passed 00:07:53.935 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:53.935 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:53.935 Test: verify copy: DIF generated, GUARD check ...[2024-07-13 02:57:00.329501] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:53.935 passed 00:07:53.935 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:53.935 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:53.935 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 02:57:00.330019] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:53.935 passed 00:07:53.935 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 02:57:00.330216] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:53.935 passed 00:07:53.935 Test: verify copy: DIF not generated, REFTAG check ...passed 00:07:53.935 Test: generate copy: DIF generated, GUARD check ...[2024-07-13 02:57:00.330276] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:53.935 passed 00:07:53.935 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:53.935 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:53.935 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:53.935 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:53.935 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:53.935 Test: generate copy: iovecs-len validate ...passed 00:07:53.935 Test: generate copy: buffer alignment validate ...passed 00:07:53.935 00:07:53.935 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.935 suites 1 1 n/a 0 0 00:07:53.935 tests 26 26 26 0 0 00:07:53.935 asserts 115 115 115 0 n/a 00:07:53.935 00:07:53.935 Elapsed time = 0.007 seconds 00:07:53.935 [2024-07-13 02:57:00.331125] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
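Note that the *ERROR* lines interleaved above are the intentional mismatches driven by the negative-path cases ("verify: DIF not generated, ..." and friends), which is why the run summary still reports 26/26 tests passed. To iterate on this suite by itself, the binary can be started directly; a minimal sketch, assuming it tolerates running without the accel JSON config the harness normally supplies on /dev/fd/62:

    # Hypothetical direct invocation of the DIF functional test binary used above.
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif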
00:07:54.871 00:07:54.871 real 0m1.702s 00:07:54.871 user 0m3.194s 00:07:54.871 sys 0m0.211s 00:07:54.871 02:57:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.871 02:57:01 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:54.871 ************************************ 00:07:54.871 END TEST accel_dif_functional_tests 00:07:54.871 ************************************ 00:07:55.130 02:57:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:55.130 00:07:55.130 real 0m54.211s 00:07:55.130 user 0m59.418s 00:07:55.130 sys 0m4.566s 00:07:55.130 02:57:01 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.130 02:57:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.130 ************************************ 00:07:55.130 END TEST accel 00:07:55.130 ************************************ 00:07:55.130 02:57:01 -- common/autotest_common.sh@1142 -- # return 0 00:07:55.130 02:57:01 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:55.130 02:57:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.130 02:57:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.130 02:57:01 -- common/autotest_common.sh@10 -- # set +x 00:07:55.130 ************************************ 00:07:55.130 START TEST accel_rpc 00:07:55.130 ************************************ 00:07:55.130 02:57:01 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:55.130 * Looking for test storage... 00:07:55.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:55.130 02:57:01 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:55.130 02:57:01 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=63548 00:07:55.130 02:57:01 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 63548 00:07:55.130 02:57:01 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:55.130 02:57:01 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 63548 ']' 00:07:55.130 02:57:01 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.130 02:57:01 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.130 02:57:01 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.130 02:57:01 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.130 02:57:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.130 [2024-07-13 02:57:01.608515] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
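waitforlisten above blocks until the freshly started target answers on its RPC socket. A hand-rolled equivalent, assuming the default /var/tmp/spdk.sock and rpc.py's -t timeout option behave as on this VM:

    # Poll the target's JSON-RPC socket until it responds; rpc_get_methods is callable
    # even in the pre-init state that --wait-for-rpc leaves the target in.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done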
00:07:55.130 [2024-07-13 02:57:01.608683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63548 ] 00:07:55.389 [2024-07-13 02:57:01.768357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.648 [2024-07-13 02:57:01.928851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.216 02:57:02 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.216 02:57:02 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:56.216 02:57:02 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:56.216 02:57:02 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:56.216 02:57:02 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:56.216 02:57:02 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:56.216 02:57:02 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:56.216 02:57:02 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.217 02:57:02 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.217 02:57:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.217 ************************************ 00:07:56.217 START TEST accel_assign_opcode 00:07:56.217 ************************************ 00:07:56.217 02:57:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:56.217 02:57:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:56.217 02:57:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.217 02:57:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:56.217 [2024-07-13 02:57:02.569756] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:56.217 02:57:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.217 02:57:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:56.217 02:57:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.217 02:57:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:56.217 [2024-07-13 02:57:02.577780] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:56.217 02:57:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.217 02:57:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:56.217 02:57:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.217 02:57:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:56.476 [2024-07-13 02:57:02.740989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:56.734 02:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.734 02:57:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:56.734 02:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.734 
02:57:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:56.734 02:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:56.734 02:57:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:56.734 02:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.992 software 00:07:56.992 00:07:56.992 real 0m0.673s 00:07:56.992 user 0m0.052s 00:07:56.992 sys 0m0.012s 00:07:56.992 02:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.992 02:57:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:56.993 ************************************ 00:07:56.993 END TEST accel_assign_opcode 00:07:56.993 ************************************ 00:07:56.993 02:57:03 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:56.993 02:57:03 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 63548 00:07:56.993 02:57:03 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 63548 ']' 00:07:56.993 02:57:03 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 63548 00:07:56.993 02:57:03 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:56.993 02:57:03 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.993 02:57:03 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63548 00:07:56.993 02:57:03 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:56.993 02:57:03 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:56.993 killing process with pid 63548 00:07:56.993 02:57:03 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63548' 00:07:56.993 02:57:03 accel_rpc -- common/autotest_common.sh@967 -- # kill 63548 00:07:56.993 02:57:03 accel_rpc -- common/autotest_common.sh@972 -- # wait 63548 00:07:58.898 00:07:58.898 real 0m3.636s 00:07:58.898 user 0m3.750s 00:07:58.898 sys 0m0.435s 00:07:58.898 02:57:05 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.898 02:57:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.898 ************************************ 00:07:58.898 END TEST accel_rpc 00:07:58.898 ************************************ 00:07:58.898 02:57:05 -- common/autotest_common.sh@1142 -- # return 0 00:07:58.898 02:57:05 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:58.898 02:57:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:58.898 02:57:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.898 02:57:05 -- common/autotest_common.sh@10 -- # set +x 00:07:58.898 ************************************ 00:07:58.898 START TEST app_cmdline 00:07:58.898 ************************************ 00:07:58.898 02:57:05 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:58.898 * Looking for test storage... 
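The accel_assign_opcode sequence that just completed can also be driven by hand against a target started with --wait-for-rpc, using the same RPCs the test issues; a sketch with the rpc.py path as on this VM:

    # Reassign the copy opcode to the software module while the target is still in
    # startup state, then finish initialization and confirm the assignment
    # (mirrors the rpc_cmd calls from accel_rpc.sh above).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m software
    $RPC framework_start_init
    $RPC accel_get_opc_assignments | jq -r .copy   # expected to print: software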
00:07:58.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:58.898 02:57:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:58.898 02:57:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63653 00:07:58.898 02:57:05 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:58.898 02:57:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63653 00:07:58.898 02:57:05 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 63653 ']' 00:07:58.898 02:57:05 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.898 02:57:05 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.898 02:57:05 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.898 02:57:05 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.898 02:57:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:58.898 [2024-07-13 02:57:05.367764] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:58.898 [2024-07-13 02:57:05.367964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63653 ] 00:07:59.157 [2024-07-13 02:57:05.532318] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.415 [2024-07-13 02:57:05.742580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.415 [2024-07-13 02:57:05.886581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:59.983 02:57:06 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.983 02:57:06 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:59.983 02:57:06 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:00.242 { 00:08:00.242 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:08:00.242 "fields": { 00:08:00.242 "major": 24, 00:08:00.242 "minor": 9, 00:08:00.242 "patch": 0, 00:08:00.242 "suffix": "-pre", 00:08:00.242 "commit": "719d03c6a" 00:08:00.242 } 00:08:00.243 } 00:08:00.243 02:57:06 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:00.243 02:57:06 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:00.243 02:57:06 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:00.243 02:57:06 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:00.243 02:57:06 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:00.243 02:57:06 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:00.243 02:57:06 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.243 02:57:06 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:00.243 02:57:06 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:00.243 02:57:06 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:00.243 02:57:06 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.501 request: 00:08:00.501 { 00:08:00.501 "method": "env_dpdk_get_mem_stats", 00:08:00.501 "req_id": 1 00:08:00.501 } 00:08:00.501 Got JSON-RPC error response 00:08:00.501 response: 00:08:00.501 { 00:08:00.501 "code": -32601, 00:08:00.501 "message": "Method not found" 00:08:00.501 } 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:00.501 02:57:06 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63653 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 63653 ']' 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 63653 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63653 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:00.501 killing process with pid 63653 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63653' 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@967 -- # kill 63653 00:08:00.501 02:57:06 app_cmdline -- common/autotest_common.sh@972 -- # wait 63653 00:08:02.406 00:08:02.406 real 0m3.521s 00:08:02.406 user 0m3.974s 00:08:02.406 sys 0m0.498s 00:08:02.406 02:57:08 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.406 ************************************ 00:08:02.406 END TEST app_cmdline 00:08:02.406 ************************************ 00:08:02.406 02:57:08 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.406 02:57:08 -- common/autotest_common.sh@1142 -- # return 0 00:08:02.406 02:57:08 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:02.406 02:57:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.406 02:57:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.406 02:57:08 -- common/autotest_common.sh@10 -- # set +x 00:08:02.406 ************************************ 00:08:02.406 START TEST version 00:08:02.406 ************************************ 00:08:02.406 02:57:08 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:02.406 * Looking for test storage... 00:08:02.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:02.406 02:57:08 version -- app/version.sh@17 -- # get_header_version major 00:08:02.406 02:57:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:02.406 02:57:08 version -- app/version.sh@14 -- # cut -f2 00:08:02.406 02:57:08 version -- app/version.sh@14 -- # tr -d '"' 00:08:02.406 02:57:08 version -- app/version.sh@17 -- # major=24 00:08:02.406 02:57:08 version -- app/version.sh@18 -- # get_header_version minor 00:08:02.406 02:57:08 version -- app/version.sh@14 -- # cut -f2 00:08:02.406 02:57:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:02.406 02:57:08 version -- app/version.sh@14 -- # tr -d '"' 00:08:02.406 02:57:08 version -- app/version.sh@18 -- # minor=9 00:08:02.406 02:57:08 version -- app/version.sh@19 -- # get_header_version patch 00:08:02.406 02:57:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:02.406 02:57:08 version -- app/version.sh@14 -- # cut -f2 00:08:02.406 02:57:08 version -- app/version.sh@14 -- # tr -d '"' 00:08:02.406 02:57:08 version -- app/version.sh@19 -- # patch=0 00:08:02.406 02:57:08 version -- app/version.sh@20 -- # get_header_version suffix 00:08:02.406 02:57:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:02.406 02:57:08 version -- app/version.sh@14 -- # cut -f2 00:08:02.406 02:57:08 version -- app/version.sh@14 -- # tr -d '"' 00:08:02.406 02:57:08 version -- app/version.sh@20 -- # suffix=-pre 00:08:02.406 02:57:08 version -- app/version.sh@22 -- # version=24.9 00:08:02.406 02:57:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:02.406 02:57:08 version -- app/version.sh@28 -- # version=24.9rc0 00:08:02.406 02:57:08 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:02.406 02:57:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:02.406 02:57:08 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:02.406 02:57:08 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:02.406 00:08:02.406 real 0m0.145s 00:08:02.406 user 0m0.069s 00:08:02.406 sys 0m0.106s 00:08:02.406 02:57:08 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.406 02:57:08 version -- common/autotest_common.sh@10 -- # set 
+x 00:08:02.406 ************************************ 00:08:02.406 END TEST version 00:08:02.406 ************************************ 00:08:02.406 02:57:08 -- common/autotest_common.sh@1142 -- # return 0 00:08:02.406 02:57:08 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:02.406 02:57:08 -- spdk/autotest.sh@198 -- # uname -s 00:08:02.406 02:57:08 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:02.406 02:57:08 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:02.406 02:57:08 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:08:02.406 02:57:08 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:08:02.406 02:57:08 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:02.406 02:57:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.406 02:57:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.406 02:57:08 -- common/autotest_common.sh@10 -- # set +x 00:08:02.406 ************************************ 00:08:02.406 START TEST spdk_dd 00:08:02.406 ************************************ 00:08:02.406 02:57:08 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:02.666 * Looking for test storage... 00:08:02.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.666 02:57:08 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.666 02:57:08 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.666 02:57:08 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.666 02:57:08 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.666 02:57:08 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.666 02:57:08 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.666 02:57:08 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.666 02:57:08 spdk_dd -- paths/export.sh@5 -- # export PATH 00:08:02.666 02:57:08 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.666 02:57:08 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:02.926 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:02.926 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:02.926 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:02.926 02:57:09 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:02.926 02:57:09 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@230 -- # local class 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@232 -- # local progif 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@233 -- # class=01 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@15 -- # local i 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@24 -- # return 0 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@15 -- # local i 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:08:02.926 02:57:09 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@24 -- # return 0 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:08:02.926 02:57:09 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:02.926 02:57:09 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@139 -- # local lib so 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:02.926 
02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:08:02.926 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:02.927 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd 
-- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:08:03.187 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:03.188 02:57:09 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 
spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:03.188 * spdk_dd linked to liburing 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:03.188 02:57:09 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:03.188 02:57:09 spdk_dd -- 
common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:03.188 02:57:09 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:03.189 02:57:09 spdk_dd -- 
common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:03.189 02:57:09 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:08:03.189 02:57:09 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:08:03.189 02:57:09 spdk_dd -- dd/common.sh@152 -- # [[ ! 
-e /usr/lib64/liburing.so.2 ]] 00:08:03.189 02:57:09 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:08:03.189 02:57:09 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:08:03.189 02:57:09 spdk_dd -- dd/common.sh@157 -- # return 0 00:08:03.189 02:57:09 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:03.189 02:57:09 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:03.189 02:57:09 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:03.189 02:57:09 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.189 02:57:09 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:03.189 ************************************ 00:08:03.189 START TEST spdk_dd_basic_rw 00:08:03.189 ************************************ 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:03.189 * Looking for test storage... 00:08:03.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:08:03.189 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:03.451 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not 
Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features 
(0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:03.451 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not 
Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read 
Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:08:03.452 02:57:09 
spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.452 ************************************ 00:08:03.452 START TEST dd_bs_lt_native_bs 00:08:03.452 ************************************ 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.452 02:57:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:03.452 { 00:08:03.452 "subsystems": [ 00:08:03.452 { 00:08:03.452 "subsystem": "bdev", 00:08:03.452 "config": [ 00:08:03.452 { 00:08:03.452 "params": { 00:08:03.452 "trtype": "pcie", 00:08:03.452 "traddr": "0000:00:10.0", 00:08:03.452 "name": "Nvme0" 00:08:03.452 }, 00:08:03.452 "method": "bdev_nvme_attach_controller" 00:08:03.452 }, 00:08:03.452 { 00:08:03.452 "method": "bdev_wait_for_examine" 00:08:03.452 } 00:08:03.452 ] 00:08:03.452 } 00:08:03.452 ] 00:08:03.452 } 00:08:03.711 [2024-07-13 02:57:09.945414] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
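The trace above derives the namespace's native block size from the spdk_nvme_identify output: the first regex picks out the current LBA format (#04) and the second reads that format's data size, giving native_bs=4096. The dd_bs_lt_native_bs case then launches spdk_dd with --bs=2048 under the NOT wrapper, so the test only passes if spdk_dd rejects the undersized block size (the error and non-zero exit follow below). A rough equivalent of that expect-failure pattern, using an illustrative helper name rather than the repository's actual NOT implementation:

  # Illustrative sketch only, not the actual SPDK autotest helper.
  expect_failure() {
      if "$@"; then
          echo "unexpected success: $*" >&2
          return 1
      fi
      return 0    # the wrapped command failed, which is what this test wants
  }

  # native_bs was detected as 4096 above, so a 2048-byte --bs must be rejected;
  # arguments mirror the trace (generated input on fd 62, bdev JSON config on fd 61).
  expect_failure spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61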
00:08:03.711 [2024-07-13 02:57:09.945596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63997 ] 00:08:03.711 [2024-07-13 02:57:10.120151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.971 [2024-07-13 02:57:10.345325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.230 [2024-07-13 02:57:10.516604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.230 [2024-07-13 02:57:10.679420] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:04.230 [2024-07-13 02:57:10.679510] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.810 [2024-07-13 02:57:11.077454] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:05.091 00:08:05.091 real 0m1.609s 00:08:05.091 user 0m1.349s 00:08:05.091 sys 0m0.208s 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:05.091 ************************************ 00:08:05.091 END TEST dd_bs_lt_native_bs 00:08:05.091 ************************************ 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.091 ************************************ 00:08:05.091 START TEST dd_rw 00:08:05.091 ************************************ 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:05.091 02:57:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.658 02:57:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:05.658 02:57:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:05.658 02:57:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:05.658 02:57:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.658 { 00:08:05.658 "subsystems": [ 00:08:05.658 { 00:08:05.658 "subsystem": "bdev", 00:08:05.658 "config": [ 00:08:05.658 { 00:08:05.658 "params": { 00:08:05.658 "trtype": "pcie", 00:08:05.658 "traddr": "0000:00:10.0", 00:08:05.658 "name": "Nvme0" 00:08:05.658 }, 00:08:05.658 "method": "bdev_nvme_attach_controller" 00:08:05.658 }, 00:08:05.658 { 00:08:05.658 "method": "bdev_wait_for_examine" 00:08:05.658 } 00:08:05.658 ] 00:08:05.658 } 00:08:05.658 ] 00:08:05.658 } 00:08:05.916 [2024-07-13 02:57:12.167219] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
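The dd_rw setup traced above builds its block-size list by left-shifting the detected native block size (4096 << 0..2 gives 4096, 8192 and 16384) and crosses it with queue depths 1 and 64; every combination gets a write, a read-back and a compare. The loop skeleton, reassembled from the xtrace lines with the per-combination work reduced to a placeholder:

  # Reconstructed from the trace; run_cycle is a stand-in for the
  # write / read-back / diff steps, not a function from basic_rw.sh.
  native_bs=4096                       # detected via spdk_nvme_identify above
  qds=(1 64)
  bss=()
  for s in 0 1 2; do
      bss+=($(( native_bs << s )))     # 4096 8192 16384
  done
  run_cycle() { echo "bs=$1 qd=$2"; }  # placeholder
  for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
          run_cycle "$bs" "$qd"
      done
  done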
00:08:05.916 [2024-07-13 02:57:12.167408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64040 ] 00:08:05.916 [2024-07-13 02:57:12.333029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.174 [2024-07-13 02:57:12.498537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.174 [2024-07-13 02:57:12.655535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.366  Copying: 60/60 [kB] (average 29 MBps) 00:08:07.366 00:08:07.366 02:57:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:07.366 02:57:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:07.366 02:57:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:07.366 02:57:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.366 { 00:08:07.366 "subsystems": [ 00:08:07.366 { 00:08:07.366 "subsystem": "bdev", 00:08:07.366 "config": [ 00:08:07.366 { 00:08:07.366 "params": { 00:08:07.366 "trtype": "pcie", 00:08:07.366 "traddr": "0000:00:10.0", 00:08:07.366 "name": "Nvme0" 00:08:07.366 }, 00:08:07.366 "method": "bdev_nvme_attach_controller" 00:08:07.366 }, 00:08:07.366 { 00:08:07.366 "method": "bdev_wait_for_examine" 00:08:07.366 } 00:08:07.366 ] 00:08:07.366 } 00:08:07.366 ] 00:08:07.366 } 00:08:07.366 [2024-07-13 02:57:13.832058] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
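The pair of invocations above shows the two directions of one cycle: the write leg streams the generated file into the bdev with --if=dd.dump0 --ob=Nvme0n1, and the read-back leg reverses it with --ib=Nvme0n1 --of=dd.dump1, adding --count=15 so exactly 15 blocks (61440 bytes) come back for comparison. Stripped to the flags that matter, with paths shortened:

  # bdev.json stands in for the JSON config the trace passes on /dev/fd/62
  # (a sketch of how that config is produced appears further below).
  spdk_dd --if=dd.dump0 --ob=Nvme0n1  --bs=4096 --qd=1            --json bdev.json   # file -> bdev
  spdk_dd --ib=Nvme0n1  --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json bdev.json   # bdev -> file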
00:08:07.367 [2024-07-13 02:57:13.832206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64070 ] 00:08:07.624 [2024-07-13 02:57:13.982714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.882 [2024-07-13 02:57:14.130022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.882 [2024-07-13 02:57:14.275696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:09.077  Copying: 60/60 [kB] (average 19 MBps) 00:08:09.077 00:08:09.077 02:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.077 02:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:09.077 02:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:09.077 02:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:09.077 02:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:09.077 02:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:09.077 02:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:09.077 02:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:09.077 02:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:09.077 02:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:09.077 02:57:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.077 { 00:08:09.077 "subsystems": [ 00:08:09.077 { 00:08:09.077 "subsystem": "bdev", 00:08:09.077 "config": [ 00:08:09.077 { 00:08:09.077 "params": { 00:08:09.077 "trtype": "pcie", 00:08:09.077 "traddr": "0000:00:10.0", 00:08:09.077 "name": "Nvme0" 00:08:09.077 }, 00:08:09.077 "method": "bdev_nvme_attach_controller" 00:08:09.077 }, 00:08:09.077 { 00:08:09.077 "method": "bdev_wait_for_examine" 00:08:09.077 } 00:08:09.077 ] 00:08:09.077 } 00:08:09.077 ] 00:08:09.077 } 00:08:09.077 [2024-07-13 02:57:15.335204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
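After each read-back the two dump files are compared with diff -q, and clear_nvme then resets the bdev by copying a single 1 MiB block of zeros from /dev/zero into it (bs=1048576, count=1, comfortably covering the 61440-byte test transfer) before the next (bs, qd) combination. The same verify-and-reset step, condensed from the flags visible in the trace:

  # Condensed sketch; bdev.json again stands in for the JSON config
  # supplied on /dev/fd/62 in the real run.
  diff -q dd.dump0 dd.dump1 || exit 1                                            # read-back must match the write
  spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json bdev.json    # zero the first 1 MiB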
00:08:09.077 [2024-07-13 02:57:15.335401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64093 ] 00:08:09.077 [2024-07-13 02:57:15.504484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.336 [2024-07-13 02:57:15.667196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.594 [2024-07-13 02:57:15.837242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.529  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:10.529 00:08:10.529 02:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:10.529 02:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:10.529 02:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:10.529 02:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:10.529 02:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:10.529 02:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:10.529 02:57:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.096 02:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:11.096 02:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:11.096 02:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:11.096 02:57:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.354 { 00:08:11.354 "subsystems": [ 00:08:11.354 { 00:08:11.354 "subsystem": "bdev", 00:08:11.354 "config": [ 00:08:11.354 { 00:08:11.354 "params": { 00:08:11.354 "trtype": "pcie", 00:08:11.354 "traddr": "0000:00:10.0", 00:08:11.354 "name": "Nvme0" 00:08:11.354 }, 00:08:11.354 "method": "bdev_nvme_attach_controller" 00:08:11.354 }, 00:08:11.354 { 00:08:11.354 "method": "bdev_wait_for_examine" 00:08:11.354 } 00:08:11.354 ] 00:08:11.354 } 00:08:11.354 ] 00:08:11.354 } 00:08:11.354 [2024-07-13 02:57:17.656859] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:11.354 [2024-07-13 02:57:17.657114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64125 ] 00:08:11.354 [2024-07-13 02:57:17.821519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.612 [2024-07-13 02:57:17.992953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.870 [2024-07-13 02:57:18.158606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.806  Copying: 60/60 [kB] (average 58 MBps) 00:08:12.806 00:08:12.806 02:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:12.806 02:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:12.806 02:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:12.806 02:57:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:12.806 { 00:08:12.806 "subsystems": [ 00:08:12.806 { 00:08:12.806 "subsystem": "bdev", 00:08:12.806 "config": [ 00:08:12.806 { 00:08:12.806 "params": { 00:08:12.806 "trtype": "pcie", 00:08:12.806 "traddr": "0000:00:10.0", 00:08:12.806 "name": "Nvme0" 00:08:12.806 }, 00:08:12.806 "method": "bdev_nvme_attach_controller" 00:08:12.806 }, 00:08:12.806 { 00:08:12.806 "method": "bdev_wait_for_examine" 00:08:12.806 } 00:08:12.806 ] 00:08:12.806 } 00:08:12.806 ] 00:08:12.806 } 00:08:12.806 [2024-07-13 02:57:19.258035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
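Every spdk_dd run above receives the same minimal bdev configuration, attaching Nvme0 at PCI address 0000:00:10.0 and then waiting for examine, as the repeated JSON blocks in the trace show; gen_conf prints it and the config reaches spdk_dd as an fd-backed path, which is why --json /dev/fd/62 appears on each command line. A sketch of that plumbing, with the printing function reduced to a stand-in rather than the real gen_conf from dd/common.sh:

  # gen_conf here just echoes the JSON visible in the trace.
  gen_conf() {
      printf '%s\n' \
        '{ "subsystems": [ { "subsystem": "bdev", "config": [' \
        '    { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },' \
        '      "method": "bdev_nvme_attach_controller" },' \
        '    { "method": "bdev_wait_for_examine" } ] } ] }'
  }
  # Process substitution hands spdk_dd an fd-backed path to the config.
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=64 --count=15 --json <(gen_conf)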
00:08:12.806 [2024-07-13 02:57:19.258241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64156 ] 00:08:13.063 [2024-07-13 02:57:19.428513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.322 [2024-07-13 02:57:19.598665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.322 [2024-07-13 02:57:19.744749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.513  Copying: 60/60 [kB] (average 58 MBps) 00:08:14.513 00:08:14.513 02:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.513 02:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:14.513 02:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:14.513 02:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:14.513 02:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:14.513 02:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:14.513 02:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:14.513 02:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:14.513 02:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:14.513 02:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:14.513 02:57:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.513 { 00:08:14.513 "subsystems": [ 00:08:14.513 { 00:08:14.513 "subsystem": "bdev", 00:08:14.513 "config": [ 00:08:14.513 { 00:08:14.513 "params": { 00:08:14.513 "trtype": "pcie", 00:08:14.513 "traddr": "0000:00:10.0", 00:08:14.513 "name": "Nvme0" 00:08:14.513 }, 00:08:14.513 "method": "bdev_nvme_attach_controller" 00:08:14.513 }, 00:08:14.513 { 00:08:14.513 "method": "bdev_wait_for_examine" 00:08:14.513 } 00:08:14.513 ] 00:08:14.513 } 00:08:14.513 ] 00:08:14.513 } 00:08:14.513 [2024-07-13 02:57:20.928287] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:14.513 [2024-07-13 02:57:20.928423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64184 ] 00:08:14.771 [2024-07-13 02:57:21.081157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.771 [2024-07-13 02:57:21.237391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.030 [2024-07-13 02:57:21.387150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:15.855  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:15.855 00:08:15.855 02:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:15.855 02:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:15.855 02:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:15.855 02:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:15.855 02:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:15.855 02:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:15.855 02:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:15.855 02:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:16.424 02:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:16.424 02:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:16.424 02:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:16.424 02:57:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:16.683 { 00:08:16.683 "subsystems": [ 00:08:16.683 { 00:08:16.683 "subsystem": "bdev", 00:08:16.683 "config": [ 00:08:16.683 { 00:08:16.683 "params": { 00:08:16.683 "trtype": "pcie", 00:08:16.683 "traddr": "0000:00:10.0", 00:08:16.683 "name": "Nvme0" 00:08:16.683 }, 00:08:16.683 "method": "bdev_nvme_attach_controller" 00:08:16.683 }, 00:08:16.683 { 00:08:16.683 "method": "bdev_wait_for_examine" 00:08:16.683 } 00:08:16.683 ] 00:08:16.683 } 00:08:16.683 ] 00:08:16.683 } 00:08:16.683 [2024-07-13 02:57:22.959359] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
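With the block size doubled to 8192, the traced run drops --count to 7, so the transfer shrinks slightly instead of doubling: 7 blocks of 8192 bytes is 57344 bytes (reported as 56/56 [kB] in the copies that follow), versus 15 blocks of 4096 bytes, 61440 bytes, in the 60/60 [kB] rounds above. The same arithmetic as shell, for reference:

  echo $(( 15 * 4096 ))   # 61440 bytes = 60 KiB, the 60/60 [kB] copies at bs=4096
  echo $((  7 * 8192 ))   # 57344 bytes = 56 KiB, the 56/56 [kB] copies at bs=8192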
00:08:16.683 [2024-07-13 02:57:22.959504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64216 ] 00:08:16.683 [2024-07-13 02:57:23.105564] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.941 [2024-07-13 02:57:23.263134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.941 [2024-07-13 02:57:23.418467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.133  Copying: 56/56 [kB] (average 54 MBps) 00:08:18.133 00:08:18.133 02:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:18.133 02:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:18.133 02:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:18.133 02:57:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:18.133 { 00:08:18.133 "subsystems": [ 00:08:18.133 { 00:08:18.133 "subsystem": "bdev", 00:08:18.133 "config": [ 00:08:18.133 { 00:08:18.133 "params": { 00:08:18.133 "trtype": "pcie", 00:08:18.133 "traddr": "0000:00:10.0", 00:08:18.133 "name": "Nvme0" 00:08:18.133 }, 00:08:18.133 "method": "bdev_nvme_attach_controller" 00:08:18.133 }, 00:08:18.133 { 00:08:18.133 "method": "bdev_wait_for_examine" 00:08:18.133 } 00:08:18.133 ] 00:08:18.133 } 00:08:18.133 ] 00:08:18.133 } 00:08:18.392 [2024-07-13 02:57:24.642613] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:18.392 [2024-07-13 02:57:24.642801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64241 ] 00:08:18.392 [2024-07-13 02:57:24.813627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.651 [2024-07-13 02:57:24.983066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.651 [2024-07-13 02:57:25.138085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:19.878  Copying: 56/56 [kB] (average 27 MBps) 00:08:19.878 00:08:19.878 02:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.878 02:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:19.878 02:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:19.878 02:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:19.878 02:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:19.878 02:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:19.878 02:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:19.878 02:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:19.878 02:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:19.878 02:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:19.878 02:57:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:19.878 { 00:08:19.878 "subsystems": [ 00:08:19.878 { 00:08:19.878 "subsystem": "bdev", 00:08:19.878 "config": [ 00:08:19.878 { 00:08:19.878 "params": { 00:08:19.878 "trtype": "pcie", 00:08:19.878 "traddr": "0000:00:10.0", 00:08:19.878 "name": "Nvme0" 00:08:19.878 }, 00:08:19.878 "method": "bdev_nvme_attach_controller" 00:08:19.878 }, 00:08:19.878 { 00:08:19.878 "method": "bdev_wait_for_examine" 00:08:19.878 } 00:08:19.878 ] 00:08:19.878 } 00:08:19.878 ] 00:08:19.878 } 00:08:19.878 [2024-07-13 02:57:26.229281] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:19.878 [2024-07-13 02:57:26.229445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64269 ] 00:08:20.145 [2024-07-13 02:57:26.393887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.145 [2024-07-13 02:57:26.569392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.403 [2024-07-13 02:57:26.718938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:21.339  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:21.339 00:08:21.598 02:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:21.598 02:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:21.598 02:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:21.598 02:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:21.598 02:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:21.598 02:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:21.598 02:57:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:22.166 02:57:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:22.166 02:57:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:22.166 02:57:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:22.166 02:57:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:22.166 { 00:08:22.166 "subsystems": [ 00:08:22.166 { 00:08:22.166 "subsystem": "bdev", 00:08:22.166 "config": [ 00:08:22.166 { 00:08:22.166 "params": { 00:08:22.166 "trtype": "pcie", 00:08:22.166 "traddr": "0000:00:10.0", 00:08:22.166 "name": "Nvme0" 00:08:22.166 }, 00:08:22.166 "method": "bdev_nvme_attach_controller" 00:08:22.166 }, 00:08:22.166 { 00:08:22.166 "method": "bdev_wait_for_examine" 00:08:22.166 } 00:08:22.166 ] 00:08:22.166 } 00:08:22.166 ] 00:08:22.166 } 00:08:22.166 [2024-07-13 02:57:28.465194] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:22.166 [2024-07-13 02:57:28.465367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64300 ] 00:08:22.166 [2024-07-13 02:57:28.625184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.425 [2024-07-13 02:57:28.782422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.685 [2024-07-13 02:57:28.931720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:23.621  Copying: 56/56 [kB] (average 54 MBps) 00:08:23.621 00:08:23.621 02:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:23.621 02:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:23.621 02:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:23.621 02:57:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:23.621 { 00:08:23.621 "subsystems": [ 00:08:23.621 { 00:08:23.621 "subsystem": "bdev", 00:08:23.621 "config": [ 00:08:23.621 { 00:08:23.621 "params": { 00:08:23.621 "trtype": "pcie", 00:08:23.621 "traddr": "0000:00:10.0", 00:08:23.621 "name": "Nvme0" 00:08:23.621 }, 00:08:23.621 "method": "bdev_nvme_attach_controller" 00:08:23.621 }, 00:08:23.621 { 00:08:23.621 "method": "bdev_wait_for_examine" 00:08:23.621 } 00:08:23.621 ] 00:08:23.621 } 00:08:23.621 ] 00:08:23.621 } 00:08:23.621 [2024-07-13 02:57:30.006108] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:23.621 [2024-07-13 02:57:30.006288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64325 ] 00:08:23.880 [2024-07-13 02:57:30.176207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.880 [2024-07-13 02:57:30.324630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.139 [2024-07-13 02:57:30.477450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:25.518  Copying: 56/56 [kB] (average 54 MBps) 00:08:25.518 00:08:25.518 02:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.518 02:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:25.518 02:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:25.518 02:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:25.518 02:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:25.518 02:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:25.518 02:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:25.518 02:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:25.518 02:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:25.518 02:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:25.518 02:57:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:25.518 { 00:08:25.518 "subsystems": [ 00:08:25.518 { 00:08:25.518 "subsystem": "bdev", 00:08:25.518 "config": [ 00:08:25.518 { 00:08:25.518 "params": { 00:08:25.518 "trtype": "pcie", 00:08:25.518 "traddr": "0000:00:10.0", 00:08:25.518 "name": "Nvme0" 00:08:25.518 }, 00:08:25.518 "method": "bdev_nvme_attach_controller" 00:08:25.518 }, 00:08:25.518 { 00:08:25.518 "method": "bdev_wait_for_examine" 00:08:25.518 } 00:08:25.518 ] 00:08:25.518 } 00:08:25.518 ] 00:08:25.518 } 00:08:25.518 [2024-07-13 02:57:31.683453] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:25.518 [2024-07-13 02:57:31.683580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64353 ] 00:08:25.518 [2024-07-13 02:57:31.840671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.518 [2024-07-13 02:57:32.002493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.777 [2024-07-13 02:57:32.161467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.973  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:26.973 00:08:26.973 02:57:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:26.973 02:57:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:26.973 02:57:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:26.973 02:57:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:26.973 02:57:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:26.973 02:57:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:26.973 02:57:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:26.973 02:57:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:27.232 02:57:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:27.232 02:57:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:27.232 02:57:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:27.232 02:57:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:27.232 { 00:08:27.232 "subsystems": [ 00:08:27.232 { 00:08:27.232 "subsystem": "bdev", 00:08:27.232 "config": [ 00:08:27.232 { 00:08:27.232 "params": { 00:08:27.232 "trtype": "pcie", 00:08:27.232 "traddr": "0000:00:10.0", 00:08:27.232 "name": "Nvme0" 00:08:27.232 }, 00:08:27.232 "method": "bdev_nvme_attach_controller" 00:08:27.232 }, 00:08:27.232 { 00:08:27.232 "method": "bdev_wait_for_examine" 00:08:27.232 } 00:08:27.232 ] 00:08:27.232 } 00:08:27.232 ] 00:08:27.232 } 00:08:27.232 [2024-07-13 02:57:33.675990] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:27.232 [2024-07-13 02:57:33.676161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64384 ] 00:08:27.514 [2024-07-13 02:57:33.833551] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.514 [2024-07-13 02:57:33.990269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.772 [2024-07-13 02:57:34.150703] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:28.969  Copying: 48/48 [kB] (average 46 MBps) 00:08:28.969 00:08:28.969 02:57:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:28.969 02:57:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:28.969 02:57:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:28.969 02:57:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:28.969 { 00:08:28.969 "subsystems": [ 00:08:28.969 { 00:08:28.969 "subsystem": "bdev", 00:08:28.969 "config": [ 00:08:28.969 { 00:08:28.969 "params": { 00:08:28.969 "trtype": "pcie", 00:08:28.969 "traddr": "0000:00:10.0", 00:08:28.969 "name": "Nvme0" 00:08:28.969 }, 00:08:28.969 "method": "bdev_nvme_attach_controller" 00:08:28.969 }, 00:08:28.969 { 00:08:28.969 "method": "bdev_wait_for_examine" 00:08:28.969 } 00:08:28.969 ] 00:08:28.969 } 00:08:28.969 ] 00:08:28.969 } 00:08:28.969 [2024-07-13 02:57:35.414770] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:28.969 [2024-07-13 02:57:35.414957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64415 ] 00:08:29.229 [2024-07-13 02:57:35.584018] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.488 [2024-07-13 02:57:35.732807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.488 [2024-07-13 02:57:35.877088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:30.314  Copying: 48/48 [kB] (average 46 MBps) 00:08:30.314 00:08:30.573 02:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:30.573 02:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:30.573 02:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:30.573 02:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:30.573 02:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:30.573 02:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:30.573 02:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:30.573 02:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:30.573 02:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:30.573 02:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:30.573 02:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:30.573 { 00:08:30.573 "subsystems": [ 00:08:30.573 { 00:08:30.573 "subsystem": "bdev", 00:08:30.573 "config": [ 00:08:30.573 { 00:08:30.573 "params": { 00:08:30.573 "trtype": "pcie", 00:08:30.573 "traddr": "0000:00:10.0", 00:08:30.573 "name": "Nvme0" 00:08:30.573 }, 00:08:30.573 "method": "bdev_nvme_attach_controller" 00:08:30.573 }, 00:08:30.573 { 00:08:30.573 "method": "bdev_wait_for_examine" 00:08:30.573 } 00:08:30.573 ] 00:08:30.573 } 00:08:30.573 ] 00:08:30.573 } 00:08:30.573 [2024-07-13 02:57:36.922669] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:30.573 [2024-07-13 02:57:36.922840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64437 ] 00:08:30.832 [2024-07-13 02:57:37.094089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.832 [2024-07-13 02:57:37.263451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.090 [2024-07-13 02:57:37.411309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:32.027  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:32.027 00:08:32.028 02:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:32.028 02:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:32.028 02:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:32.028 02:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:32.028 02:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:32.028 02:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:32.028 02:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:32.594 02:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:32.594 02:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:32.594 02:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:32.594 02:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:32.594 { 00:08:32.594 "subsystems": [ 00:08:32.594 { 00:08:32.594 "subsystem": "bdev", 00:08:32.594 "config": [ 00:08:32.594 { 00:08:32.594 "params": { 00:08:32.594 "trtype": "pcie", 00:08:32.594 "traddr": "0000:00:10.0", 00:08:32.594 "name": "Nvme0" 00:08:32.594 }, 00:08:32.594 "method": "bdev_nvme_attach_controller" 00:08:32.594 }, 00:08:32.595 { 00:08:32.595 "method": "bdev_wait_for_examine" 00:08:32.595 } 00:08:32.595 ] 00:08:32.595 } 00:08:32.595 ] 00:08:32.595 } 00:08:32.595 [2024-07-13 02:57:39.004266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:32.595 [2024-07-13 02:57:39.004480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64468 ] 00:08:32.853 [2024-07-13 02:57:39.161470] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.853 [2024-07-13 02:57:39.329742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.111 [2024-07-13 02:57:39.492847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:34.305  Copying: 48/48 [kB] (average 46 MBps) 00:08:34.305 00:08:34.305 02:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:34.305 02:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:34.305 02:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:34.305 02:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:34.305 { 00:08:34.305 "subsystems": [ 00:08:34.305 { 00:08:34.305 "subsystem": "bdev", 00:08:34.305 "config": [ 00:08:34.305 { 00:08:34.305 "params": { 00:08:34.305 "trtype": "pcie", 00:08:34.305 "traddr": "0000:00:10.0", 00:08:34.305 "name": "Nvme0" 00:08:34.305 }, 00:08:34.305 "method": "bdev_nvme_attach_controller" 00:08:34.305 }, 00:08:34.305 { 00:08:34.305 "method": "bdev_wait_for_examine" 00:08:34.305 } 00:08:34.305 ] 00:08:34.305 } 00:08:34.305 ] 00:08:34.305 } 00:08:34.305 [2024-07-13 02:57:40.599608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:34.305 [2024-07-13 02:57:40.599753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64499 ] 00:08:34.305 [2024-07-13 02:57:40.753480] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.573 [2024-07-13 02:57:40.916673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.844 [2024-07-13 02:57:41.072557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:35.781  Copying: 48/48 [kB] (average 46 MBps) 00:08:35.781 00:08:35.781 02:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:35.781 02:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:35.781 02:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:35.781 02:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:35.781 02:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:35.781 02:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:35.781 02:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:35.781 02:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:35.781 02:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:35.781 02:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:35.781 02:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:35.781 { 00:08:35.781 "subsystems": [ 00:08:35.781 { 00:08:35.781 "subsystem": "bdev", 00:08:35.781 "config": [ 00:08:35.781 { 00:08:35.781 "params": { 00:08:35.781 "trtype": "pcie", 00:08:35.781 "traddr": "0000:00:10.0", 00:08:35.781 "name": "Nvme0" 00:08:35.781 }, 00:08:35.781 "method": "bdev_nvme_attach_controller" 00:08:35.781 }, 00:08:35.781 { 00:08:35.781 "method": "bdev_wait_for_examine" 00:08:35.781 } 00:08:35.781 ] 00:08:35.781 } 00:08:35.781 ] 00:08:35.781 } 00:08:36.040 [2024-07-13 02:57:42.319836] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:36.040 [2024-07-13 02:57:42.320075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64526 ] 00:08:36.040 [2024-07-13 02:57:42.475965] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.299 [2024-07-13 02:57:42.641354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.569 [2024-07-13 02:57:42.803756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:37.509  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:37.509 00:08:37.509 00:08:37.509 real 0m32.294s 00:08:37.509 user 0m27.518s 00:08:37.509 sys 0m13.161s 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.509 ************************************ 00:08:37.509 END TEST dd_rw 00:08:37.509 ************************************ 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:37.509 ************************************ 00:08:37.509 START TEST dd_rw_offset 00:08:37.509 ************************************ 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:37.509 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:37.510 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=xcuteflr3lxvyoli6riyzdnc8suhebab1gyjw4vfqr0eiv2wfqg17xmy504jd38ctd575fb9ds0jpgpwnqqp9hzyhwp2hpm42b2batgvs1j3kcrjr2bvw09vizewb2bkwikq4r9k7s7buf1ounip2776kfm0m9pmu5woczh4y4wrzpmub4ywaf9e59j3hwc2j6cba537zgvuq2150ov5hn7n2hb4b3z9b3lfo7oamg0al5ydh0dssknjom14myjzqic98egyayga6tdjaqjhanyugweus2fbsu85fbhh4xs6ma01vfynngmcbc63lqmh25pebkqcmhpyv4y8tmengj76s4zaad1uebzv6gdewj5ql566cgl80sjk5z6dfcce07kgs0266mee9nsorkiukjt06mza6snrn0e7hetdqwwsze3svjphuyeisq6zic5paazm9ev4g0ey8m1wlppscmogvds10uucjky2zuhal2njynnniwcpvlzupx2gcmzuk2125z63jm9ufpcbk1enmcf655l3t57qvgoek8r6gg2huzmryz8js8rjit0kbaefx4ozou1v64cbnobggzznemeeccnflqv4rxuj1rm142ny77f6nvb9ek2kwm3ngmwjwymje8fk70ai1gy00o92j83cb3awjus2fwc245v73mecdxzg20m75b9lnm95y1tuhyhrd9qthmpfecc7hm9t5w6uo5vdbdbqyoz3sw6up912xhz8y970igjco8kfeniutzd1w5qrlk0zu91e0fo93r6x41wwddey804yfry421n5gq2ayw11zqoay7d9jocui4bhehoedt3i7x1pyu3uu91i496bvud26owvbijtafq11s5rjkua2ui39234b0qs9fcg125vmndvv4vvrvkqmrude9i8x40u527hh2nia3ihdmb2pdxs7qr3uoa4tabqw1f0rfchgu1qs1uv1h54m9lrztey8so62frx956x37qd281fhpvh9ln3sshen1eefvircq6dnxhfhqm0suq8xy2dh35ovvy71odnmw31sjv6uecyr33pjqd5gos9r70hnlu18b3kbbpl5mx5wud0a0dc4cn1yqtbmd55giy40z889coxch0rpz8sc9w99alsjfafwnmy88odoccy28ulr560oomc9fddg6c7nicq10fn5ajhlvuhqmidnonj369r374ea6oi70fzsk2znq3vu76vvhpq7fg6xi755ewbyy22vm3wyrtvmnvc2h3ddbqyd1aridoh7117lw56ef8iljr7jp92gmvl7whb578qnur3br0iubwynnw8doz1vs9c7hlia83v408r7mt6f6s6ba2bzu2300zneyhmxgwqcgyppxcpmhljl6oz4r661b7uxkywh9k9eska0a41f1ha10gwcn14h64tupw7wzkahbpq2pa7wplfpkm3pks1l1d63vq8ouo9shtx3q31xdx2t2ue4oms8nermlzfn6of48epy00xy8l1aow0it5c88ucb11bou4pqtuk0veh4febz8wxs4hlvjfpwwf6t5edd1535c8snq71ptosoe2uglxaiq6gy6dwxav4s0l5wqqs0wa8t2jmtm01zfmfe0w8zv545v4e5rgesgooyngtm9wimursi0xxuo3adw3m3wl3vim8nn66qjq009xoi9m59llbadeobiivtzlt2uyf743i9twehfzkpos2gx7c6412tp2gg338q44l3050xpoliek8fb37uu5jsd1hzcj6s9dqjjdyia3wph8evs1spt3qv6p03owfffp02z4sb3m9xapmua8s5ggd5vw4bcg43rj1u7ed7fl2otu27v2xstf9xptqbs1bcvy4kadsuyvi9xumsdpeejsnffa26ezi8dk45kjvuqfkrcx41psqei7galcsrpbywgogbidqgjdlwu1uylboe7vdt953yn6xu36h41k5o9dsippmh6n64quov8sfptohwp6w41numm4mk2dapiatl7uztooreypz1z1pjf7780ob0dm8b9tcu08rrjvqpchx8oob22w7nk3riyz24hes7g2hr69lguk9elckkeqv6cifv5alnv08p6oy68ysmd7muuilf178f2x3f1wqxiq30l01452pm9n08qx26mrcm4omi4yzbff973uimf7nc8hyok2433n7l4ibubto0ye5gyouosjyb14fkglz96fkx03hcc80h6lvhvkz6594d2nwetkpy015tsbsjqjqc0kv0ryy2t86hq1awtg0397cvquuu7gx6tk189ktszcl9iivw54s65xotab6d112pcvf2zrprppar6u4vniz7bux6z3j6tyzer2crlrq6vsrs638s0fm5sk6p4bqzcmrf5digrhtx111wr9d60t2f2785l9fqvg72kp9xj6bvceddlp8t09w7b5akue2a75fx0u61tsy1yxvwhnmjd3yw0mftcye5r4m1xi9nt9cj502jpnlifc2ysj2kms1j4sgq77gymbqsegn4h2i9o45ip1l5jsslkyxcsmwa6ndi66tz51e68vnov0sun8hnej3j3n3zvlx6qw4thyd04enxr6ak8b5gomk1ba53og1nopyx2l1i9ky8mbrf48j3m83vx3egir9ze9daabfmrgl56oblmh58oa54p4qav56jn9pv7b2r23h6ht71hjsrmcbgq3skhx4nfryzxhgdwpv8ks0o8ew4ri8gmyos7r2v0njxkxa7vp3gq2rhz5t1d91yu6yfbcgymxniusy1is40hj3yhbaf7el2atjnmqbe6yuxazzo10vhv9urg2ve3fm54zi9g3zkxeklusbmhjhofp54djcsz0jf39ngetrigx12t7r22lchy0r3askrunmheskhoze7a38e29ycbjksambvv3spfzcchivpe0g0wppix4hcovxe7pryfyxb4on5ak63mxsw8oq03rdnn18het5nqxcd7xi1qlr99awwqqvw23fnucutlnibdgu3s1gcitxr1bqgflurm8z5ske80jdil99kdf0c2it941byarhzst1yvr555s04jwgyh18fcmp2o7thgc6f2hnrwfwr7yywz45n7jqy6dk65j9t25gwvuw21qj1n7lyibivjyeflw4u0sv79r6hx7mbde5mnumqy3b7ktbcwaoe3j2aexdsppd2wtoo0c1kid1wjk46oy9adq0vrtmmlz43pbh7vd6irehm2ijrsa8l7yyq007cl85j0r2wuwxfv9tqn1p6v5a2pth7zbnt7jvoe4tp6ym01wa7yx04j5b80fp4x52099vskt6i6vp6mn671dev037nt7g3p6myebbut489l2p4008mhp0m8yk9uq4s1skf97wgl9ej48v42epi4rsbj9a3i7385cb980lletrfck3i99ugfbqdxcc2tb4iospmgn1x6qmeriho82p0jddx1emcczx1box2jeu1d5d637l1nibaizo9fz333jj17ow94r773
xtnyt7fkao5lvffca25tz2vkf8ngw6jlu82tym43kbvf48kwjv5s76qzk17or9g41kgumpcfa9hl5njksqfm7plc1wevtzpjjuc0r0o4zlu9d7czt3i4u3joqj1ulf583eap8yoexm59amcs8nok09fnck55i0ofic8hsn7918tk5qpebgmn6m6on5f9zti5qqx6x8w4ahza8wsd23qouhdbnf3xy85fdphz64i1es5ll7mw1fwgvo3hb7wlz3by1g2uatcx8gsjbm813m8nf79vec3lsl6g3269t0gn8t67ox9eulz7qlu8iamik9imyhr77vgzhsnw4nqi0dii4gd04skmxtzigpqhvohnv3q0y8fvk8pfarq9ke5jnoii5xo3j7018qa08arl4oxioooxz6mcutzuyuz7kt8sbyzm37gojkemc52danblm8ompok0o240ddp7gememyweftrtk408875u7avzyiplyiu9tdvloxj1da6i5zrtk4nwtpvhu0vebxlvnmpx2e6sz81n8eefws4hwb 00:08:37.510 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:37.510 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:37.510 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:37.510 02:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 { 00:08:37.510 "subsystems": [ 00:08:37.510 { 00:08:37.510 "subsystem": "bdev", 00:08:37.510 "config": [ 00:08:37.510 { 00:08:37.510 "params": { 00:08:37.510 "trtype": "pcie", 00:08:37.510 "traddr": "0000:00:10.0", 00:08:37.510 "name": "Nvme0" 00:08:37.510 }, 00:08:37.510 "method": "bdev_nvme_attach_controller" 00:08:37.510 }, 00:08:37.510 { 00:08:37.510 "method": "bdev_wait_for_examine" 00:08:37.510 } 00:08:37.510 ] 00:08:37.510 } 00:08:37.510 ] 00:08:37.510 } 00:08:37.510 [2024-07-13 02:57:43.976188] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:37.510 [2024-07-13 02:57:43.976323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64569 ] 00:08:37.769 [2024-07-13 02:57:44.132922] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.028 [2024-07-13 02:57:44.293569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.028 [2024-07-13 02:57:44.455758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:39.225  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:39.225 00:08:39.225 02:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:39.225 02:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:39.225 02:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:39.225 02:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:39.225 { 00:08:39.225 "subsystems": [ 00:08:39.225 { 00:08:39.225 "subsystem": "bdev", 00:08:39.225 "config": [ 00:08:39.225 { 00:08:39.225 "params": { 00:08:39.225 "trtype": "pcie", 00:08:39.225 "traddr": "0000:00:10.0", 00:08:39.225 "name": "Nvme0" 00:08:39.225 }, 00:08:39.225 "method": "bdev_nvme_attach_controller" 00:08:39.225 }, 00:08:39.225 { 00:08:39.225 "method": "bdev_wait_for_examine" 00:08:39.225 } 00:08:39.225 ] 00:08:39.225 } 00:08:39.225 ] 00:08:39.225 } 00:08:39.225 [2024-07-13 02:57:45.701931] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:39.225 [2024-07-13 02:57:45.702088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64595 ] 00:08:39.484 [2024-07-13 02:57:45.860770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.743 [2024-07-13 02:57:46.026160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.743 [2024-07-13 02:57:46.179515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:40.940  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:40.940 00:08:40.940 02:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ xcuteflr3lxvyoli6riyzdnc8suhebab1gyjw4vfqr0eiv2wfqg17xmy504jd38ctd575fb9ds0jpgpwnqqp9hzyhwp2hpm42b2batgvs1j3kcrjr2bvw09vizewb2bkwikq4r9k7s7buf1ounip2776kfm0m9pmu5woczh4y4wrzpmub4ywaf9e59j3hwc2j6cba537zgvuq2150ov5hn7n2hb4b3z9b3lfo7oamg0al5ydh0dssknjom14myjzqic98egyayga6tdjaqjhanyugweus2fbsu85fbhh4xs6ma01vfynngmcbc63lqmh25pebkqcmhpyv4y8tmengj76s4zaad1uebzv6gdewj5ql566cgl80sjk5z6dfcce07kgs0266mee9nsorkiukjt06mza6snrn0e7hetdqwwsze3svjphuyeisq6zic5paazm9ev4g0ey8m1wlppscmogvds10uucjky2zuhal2njynnniwcpvlzupx2gcmzuk2125z63jm9ufpcbk1enmcf655l3t57qvgoek8r6gg2huzmryz8js8rjit0kbaefx4ozou1v64cbnobggzznemeeccnflqv4rxuj1rm142ny77f6nvb9ek2kwm3ngmwjwymje8fk70ai1gy00o92j83cb3awjus2fwc245v73mecdxzg20m75b9lnm95y1tuhyhrd9qthmpfecc7hm9t5w6uo5vdbdbqyoz3sw6up912xhz8y970igjco8kfeniutzd1w5qrlk0zu91e0fo93r6x41wwddey804yfry421n5gq2ayw11zqoay7d9jocui4bhehoedt3i7x1pyu3uu91i496bvud26owvbijtafq11s5rjkua2ui39234b0qs9fcg125vmndvv4vvrvkqmrude9i8x40u527hh2nia3ihdmb2pdxs7qr3uoa4tabqw1f0rfchgu1qs1uv1h54m9lrztey8so62frx956x37qd281fhpvh9ln3sshen1eefvircq6dnxhfhqm0suq8xy2dh35ovvy71odnmw31sjv6uecyr33pjqd5gos9r70hnlu18b3kbbpl5mx5wud0a0dc4cn1yqtbmd55giy40z889coxch0rpz8sc9w99alsjfafwnmy88odoccy28ulr560oomc9fddg6c7nicq10fn5ajhlvuhqmidnonj369r374ea6oi70fzsk2znq3vu76vvhpq7fg6xi755ewbyy22vm3wyrtvmnvc2h3ddbqyd1aridoh7117lw56ef8iljr7jp92gmvl7whb578qnur3br0iubwynnw8doz1vs9c7hlia83v408r7mt6f6s6ba2bzu2300zneyhmxgwqcgyppxcpmhljl6oz4r661b7uxkywh9k9eska0a41f1ha10gwcn14h64tupw7wzkahbpq2pa7wplfpkm3pks1l1d63vq8ouo9shtx3q31xdx2t2ue4oms8nermlzfn6of48epy00xy8l1aow0it5c88ucb11bou4pqtuk0veh4febz8wxs4hlvjfpwwf6t5edd1535c8snq71ptosoe2uglxaiq6gy6dwxav4s0l5wqqs0wa8t2jmtm01zfmfe0w8zv545v4e5rgesgooyngtm9wimursi0xxuo3adw3m3wl3vim8nn66qjq009xoi9m59llbadeobiivtzlt2uyf743i9twehfzkpos2gx7c6412tp2gg338q44l3050xpoliek8fb37uu5jsd1hzcj6s9dqjjdyia3wph8evs1spt3qv6p03owfffp02z4sb3m9xapmua8s5ggd5vw4bcg43rj1u7ed7fl2otu27v2xstf9xptqbs1bcvy4kadsuyvi9xumsdpeejsnffa26ezi8dk45kjvuqfkrcx41psqei7galcsrpbywgogbidqgjdlwu1uylboe7vdt953yn6xu36h41k5o9dsippmh6n64quov8sfptohwp6w41numm4mk2dapiatl7uztooreypz1z1pjf7780ob0dm8b9tcu08rrjvqpchx8oob22w7nk3riyz24hes7g2hr69lguk9elckkeqv6cifv5alnv08p6oy68ysmd7muuilf178f2x3f1wqxiq30l01452pm9n08qx26mrcm4omi4yzbff973uimf7nc8hyok2433n7l4ibubto0ye5gyouosjyb14fkglz96fkx03hcc80h6lvhvkz6594d2nwetkpy015tsbsjqjqc0kv0ryy2t86hq1awtg0397cvquuu7gx6tk189ktszcl9iivw54s65xotab6d112pcvf2zrprppar6u4vniz7bux6z3j6tyzer2crlrq6vsrs638s0fm5sk6p4bqzcmrf5digrhtx111wr9d60t2f2785l9fqvg72kp9xj6bvceddlp8t09w7b5akue2a75fx0u61tsy1yxvwhnmjd3yw0mftcye5r4m1xi9nt9cj502jpnlifc2ysj2kms1j4sgq77gymbqsegn4h2i9o45ip1l5jsslkyxcsmwa6ndi66tz51e68vnov0sun8hnej3j3n3zvlx6qw4thyd04enxr6ak8b5gomk
1ba53og1nopyx2l1i9ky8mbrf48j3m83vx3egir9ze9daabfmrgl56oblmh58oa54p4qav56jn9pv7b2r23h6ht71hjsrmcbgq3skhx4nfryzxhgdwpv8ks0o8ew4ri8gmyos7r2v0njxkxa7vp3gq2rhz5t1d91yu6yfbcgymxniusy1is40hj3yhbaf7el2atjnmqbe6yuxazzo10vhv9urg2ve3fm54zi9g3zkxeklusbmhjhofp54djcsz0jf39ngetrigx12t7r22lchy0r3askrunmheskhoze7a38e29ycbjksambvv3spfzcchivpe0g0wppix4hcovxe7pryfyxb4on5ak63mxsw8oq03rdnn18het5nqxcd7xi1qlr99awwqqvw23fnucutlnibdgu3s1gcitxr1bqgflurm8z5ske80jdil99kdf0c2it941byarhzst1yvr555s04jwgyh18fcmp2o7thgc6f2hnrwfwr7yywz45n7jqy6dk65j9t25gwvuw21qj1n7lyibivjyeflw4u0sv79r6hx7mbde5mnumqy3b7ktbcwaoe3j2aexdsppd2wtoo0c1kid1wjk46oy9adq0vrtmmlz43pbh7vd6irehm2ijrsa8l7yyq007cl85j0r2wuwxfv9tqn1p6v5a2pth7zbnt7jvoe4tp6ym01wa7yx04j5b80fp4x52099vskt6i6vp6mn671dev037nt7g3p6myebbut489l2p4008mhp0m8yk9uq4s1skf97wgl9ej48v42epi4rsbj9a3i7385cb980lletrfck3i99ugfbqdxcc2tb4iospmgn1x6qmeriho82p0jddx1emcczx1box2jeu1d5d637l1nibaizo9fz333jj17ow94r773xtnyt7fkao5lvffca25tz2vkf8ngw6jlu82tym43kbvf48kwjv5s76qzk17or9g41kgumpcfa9hl5njksqfm7plc1wevtzpjjuc0r0o4zlu9d7czt3i4u3joqj1ulf583eap8yoexm59amcs8nok09fnck55i0ofic8hsn7918tk5qpebgmn6m6on5f9zti5qqx6x8w4ahza8wsd23qouhdbnf3xy85fdphz64i1es5ll7mw1fwgvo3hb7wlz3by1g2uatcx8gsjbm813m8nf79vec3lsl6g3269t0gn8t67ox9eulz7qlu8iamik9imyhr77vgzhsnw4nqi0dii4gd04skmxtzigpqhvohnv3q0y8fvk8pfarq9ke5jnoii5xo3j7018qa08arl4oxioooxz6mcutzuyuz7kt8sbyzm37gojkemc52danblm8ompok0o240ddp7gememyweftrtk408875u7avzyiplyiu9tdvloxj1da6i5zrtk4nwtpvhu0vebxlvnmpx2e6sz81n8eefws4hwb == \x\c\u\t\e\f\l\r\3\l\x\v\y\o\l\i\6\r\i\y\z\d\n\c\8\s\u\h\e\b\a\b\1\g\y\j\w\4\v\f\q\r\0\e\i\v\2\w\f\q\g\1\7\x\m\y\5\0\4\j\d\3\8\c\t\d\5\7\5\f\b\9\d\s\0\j\p\g\p\w\n\q\q\p\9\h\z\y\h\w\p\2\h\p\m\4\2\b\2\b\a\t\g\v\s\1\j\3\k\c\r\j\r\2\b\v\w\0\9\v\i\z\e\w\b\2\b\k\w\i\k\q\4\r\9\k\7\s\7\b\u\f\1\o\u\n\i\p\2\7\7\6\k\f\m\0\m\9\p\m\u\5\w\o\c\z\h\4\y\4\w\r\z\p\m\u\b\4\y\w\a\f\9\e\5\9\j\3\h\w\c\2\j\6\c\b\a\5\3\7\z\g\v\u\q\2\1\5\0\o\v\5\h\n\7\n\2\h\b\4\b\3\z\9\b\3\l\f\o\7\o\a\m\g\0\a\l\5\y\d\h\0\d\s\s\k\n\j\o\m\1\4\m\y\j\z\q\i\c\9\8\e\g\y\a\y\g\a\6\t\d\j\a\q\j\h\a\n\y\u\g\w\e\u\s\2\f\b\s\u\8\5\f\b\h\h\4\x\s\6\m\a\0\1\v\f\y\n\n\g\m\c\b\c\6\3\l\q\m\h\2\5\p\e\b\k\q\c\m\h\p\y\v\4\y\8\t\m\e\n\g\j\7\6\s\4\z\a\a\d\1\u\e\b\z\v\6\g\d\e\w\j\5\q\l\5\6\6\c\g\l\8\0\s\j\k\5\z\6\d\f\c\c\e\0\7\k\g\s\0\2\6\6\m\e\e\9\n\s\o\r\k\i\u\k\j\t\0\6\m\z\a\6\s\n\r\n\0\e\7\h\e\t\d\q\w\w\s\z\e\3\s\v\j\p\h\u\y\e\i\s\q\6\z\i\c\5\p\a\a\z\m\9\e\v\4\g\0\e\y\8\m\1\w\l\p\p\s\c\m\o\g\v\d\s\1\0\u\u\c\j\k\y\2\z\u\h\a\l\2\n\j\y\n\n\n\i\w\c\p\v\l\z\u\p\x\2\g\c\m\z\u\k\2\1\2\5\z\6\3\j\m\9\u\f\p\c\b\k\1\e\n\m\c\f\6\5\5\l\3\t\5\7\q\v\g\o\e\k\8\r\6\g\g\2\h\u\z\m\r\y\z\8\j\s\8\r\j\i\t\0\k\b\a\e\f\x\4\o\z\o\u\1\v\6\4\c\b\n\o\b\g\g\z\z\n\e\m\e\e\c\c\n\f\l\q\v\4\r\x\u\j\1\r\m\1\4\2\n\y\7\7\f\6\n\v\b\9\e\k\2\k\w\m\3\n\g\m\w\j\w\y\m\j\e\8\f\k\7\0\a\i\1\g\y\0\0\o\9\2\j\8\3\c\b\3\a\w\j\u\s\2\f\w\c\2\4\5\v\7\3\m\e\c\d\x\z\g\2\0\m\7\5\b\9\l\n\m\9\5\y\1\t\u\h\y\h\r\d\9\q\t\h\m\p\f\e\c\c\7\h\m\9\t\5\w\6\u\o\5\v\d\b\d\b\q\y\o\z\3\s\w\6\u\p\9\1\2\x\h\z\8\y\9\7\0\i\g\j\c\o\8\k\f\e\n\i\u\t\z\d\1\w\5\q\r\l\k\0\z\u\9\1\e\0\f\o\9\3\r\6\x\4\1\w\w\d\d\e\y\8\0\4\y\f\r\y\4\2\1\n\5\g\q\2\a\y\w\1\1\z\q\o\a\y\7\d\9\j\o\c\u\i\4\b\h\e\h\o\e\d\t\3\i\7\x\1\p\y\u\3\u\u\9\1\i\4\9\6\b\v\u\d\2\6\o\w\v\b\i\j\t\a\f\q\1\1\s\5\r\j\k\u\a\2\u\i\3\9\2\3\4\b\0\q\s\9\f\c\g\1\2\5\v\m\n\d\v\v\4\v\v\r\v\k\q\m\r\u\d\e\9\i\8\x\4\0\u\5\2\7\h\h\2\n\i\a\3\i\h\d\m\b\2\p\d\x\s\7\q\r\3\u\o\a\4\t\a\b\q\w\1\f\0\r\f\c\h\g\u\1\q\s\1\u\v\1\h\5\4\m\9\l\r\z\t\e\y\8\s\o\6\2\f\r\x\9\5\6\x\3\7\q\d\2\8\1\f\h\p\v\h\9\l\n\3\s\s\h\e\n\1\e\e\f\v\i\r\c\q\6\d\n\x\h\f\h\q\m\0\s\u\q\8\x\
y\2\d\h\3\5\o\v\v\y\7\1\o\d\n\m\w\3\1\s\j\v\6\u\e\c\y\r\3\3\p\j\q\d\5\g\o\s\9\r\7\0\h\n\l\u\1\8\b\3\k\b\b\p\l\5\m\x\5\w\u\d\0\a\0\d\c\4\c\n\1\y\q\t\b\m\d\5\5\g\i\y\4\0\z\8\8\9\c\o\x\c\h\0\r\p\z\8\s\c\9\w\9\9\a\l\s\j\f\a\f\w\n\m\y\8\8\o\d\o\c\c\y\2\8\u\l\r\5\6\0\o\o\m\c\9\f\d\d\g\6\c\7\n\i\c\q\1\0\f\n\5\a\j\h\l\v\u\h\q\m\i\d\n\o\n\j\3\6\9\r\3\7\4\e\a\6\o\i\7\0\f\z\s\k\2\z\n\q\3\v\u\7\6\v\v\h\p\q\7\f\g\6\x\i\7\5\5\e\w\b\y\y\2\2\v\m\3\w\y\r\t\v\m\n\v\c\2\h\3\d\d\b\q\y\d\1\a\r\i\d\o\h\7\1\1\7\l\w\5\6\e\f\8\i\l\j\r\7\j\p\9\2\g\m\v\l\7\w\h\b\5\7\8\q\n\u\r\3\b\r\0\i\u\b\w\y\n\n\w\8\d\o\z\1\v\s\9\c\7\h\l\i\a\8\3\v\4\0\8\r\7\m\t\6\f\6\s\6\b\a\2\b\z\u\2\3\0\0\z\n\e\y\h\m\x\g\w\q\c\g\y\p\p\x\c\p\m\h\l\j\l\6\o\z\4\r\6\6\1\b\7\u\x\k\y\w\h\9\k\9\e\s\k\a\0\a\4\1\f\1\h\a\1\0\g\w\c\n\1\4\h\6\4\t\u\p\w\7\w\z\k\a\h\b\p\q\2\p\a\7\w\p\l\f\p\k\m\3\p\k\s\1\l\1\d\6\3\v\q\8\o\u\o\9\s\h\t\x\3\q\3\1\x\d\x\2\t\2\u\e\4\o\m\s\8\n\e\r\m\l\z\f\n\6\o\f\4\8\e\p\y\0\0\x\y\8\l\1\a\o\w\0\i\t\5\c\8\8\u\c\b\1\1\b\o\u\4\p\q\t\u\k\0\v\e\h\4\f\e\b\z\8\w\x\s\4\h\l\v\j\f\p\w\w\f\6\t\5\e\d\d\1\5\3\5\c\8\s\n\q\7\1\p\t\o\s\o\e\2\u\g\l\x\a\i\q\6\g\y\6\d\w\x\a\v\4\s\0\l\5\w\q\q\s\0\w\a\8\t\2\j\m\t\m\0\1\z\f\m\f\e\0\w\8\z\v\5\4\5\v\4\e\5\r\g\e\s\g\o\o\y\n\g\t\m\9\w\i\m\u\r\s\i\0\x\x\u\o\3\a\d\w\3\m\3\w\l\3\v\i\m\8\n\n\6\6\q\j\q\0\0\9\x\o\i\9\m\5\9\l\l\b\a\d\e\o\b\i\i\v\t\z\l\t\2\u\y\f\7\4\3\i\9\t\w\e\h\f\z\k\p\o\s\2\g\x\7\c\6\4\1\2\t\p\2\g\g\3\3\8\q\4\4\l\3\0\5\0\x\p\o\l\i\e\k\8\f\b\3\7\u\u\5\j\s\d\1\h\z\c\j\6\s\9\d\q\j\j\d\y\i\a\3\w\p\h\8\e\v\s\1\s\p\t\3\q\v\6\p\0\3\o\w\f\f\f\p\0\2\z\4\s\b\3\m\9\x\a\p\m\u\a\8\s\5\g\g\d\5\v\w\4\b\c\g\4\3\r\j\1\u\7\e\d\7\f\l\2\o\t\u\2\7\v\2\x\s\t\f\9\x\p\t\q\b\s\1\b\c\v\y\4\k\a\d\s\u\y\v\i\9\x\u\m\s\d\p\e\e\j\s\n\f\f\a\2\6\e\z\i\8\d\k\4\5\k\j\v\u\q\f\k\r\c\x\4\1\p\s\q\e\i\7\g\a\l\c\s\r\p\b\y\w\g\o\g\b\i\d\q\g\j\d\l\w\u\1\u\y\l\b\o\e\7\v\d\t\9\5\3\y\n\6\x\u\3\6\h\4\1\k\5\o\9\d\s\i\p\p\m\h\6\n\6\4\q\u\o\v\8\s\f\p\t\o\h\w\p\6\w\4\1\n\u\m\m\4\m\k\2\d\a\p\i\a\t\l\7\u\z\t\o\o\r\e\y\p\z\1\z\1\p\j\f\7\7\8\0\o\b\0\d\m\8\b\9\t\c\u\0\8\r\r\j\v\q\p\c\h\x\8\o\o\b\2\2\w\7\n\k\3\r\i\y\z\2\4\h\e\s\7\g\2\h\r\6\9\l\g\u\k\9\e\l\c\k\k\e\q\v\6\c\i\f\v\5\a\l\n\v\0\8\p\6\o\y\6\8\y\s\m\d\7\m\u\u\i\l\f\1\7\8\f\2\x\3\f\1\w\q\x\i\q\3\0\l\0\1\4\5\2\p\m\9\n\0\8\q\x\2\6\m\r\c\m\4\o\m\i\4\y\z\b\f\f\9\7\3\u\i\m\f\7\n\c\8\h\y\o\k\2\4\3\3\n\7\l\4\i\b\u\b\t\o\0\y\e\5\g\y\o\u\o\s\j\y\b\1\4\f\k\g\l\z\9\6\f\k\x\0\3\h\c\c\8\0\h\6\l\v\h\v\k\z\6\5\9\4\d\2\n\w\e\t\k\p\y\0\1\5\t\s\b\s\j\q\j\q\c\0\k\v\0\r\y\y\2\t\8\6\h\q\1\a\w\t\g\0\3\9\7\c\v\q\u\u\u\7\g\x\6\t\k\1\8\9\k\t\s\z\c\l\9\i\i\v\w\5\4\s\6\5\x\o\t\a\b\6\d\1\1\2\p\c\v\f\2\z\r\p\r\p\p\a\r\6\u\4\v\n\i\z\7\b\u\x\6\z\3\j\6\t\y\z\e\r\2\c\r\l\r\q\6\v\s\r\s\6\3\8\s\0\f\m\5\s\k\6\p\4\b\q\z\c\m\r\f\5\d\i\g\r\h\t\x\1\1\1\w\r\9\d\6\0\t\2\f\2\7\8\5\l\9\f\q\v\g\7\2\k\p\9\x\j\6\b\v\c\e\d\d\l\p\8\t\0\9\w\7\b\5\a\k\u\e\2\a\7\5\f\x\0\u\6\1\t\s\y\1\y\x\v\w\h\n\m\j\d\3\y\w\0\m\f\t\c\y\e\5\r\4\m\1\x\i\9\n\t\9\c\j\5\0\2\j\p\n\l\i\f\c\2\y\s\j\2\k\m\s\1\j\4\s\g\q\7\7\g\y\m\b\q\s\e\g\n\4\h\2\i\9\o\4\5\i\p\1\l\5\j\s\s\l\k\y\x\c\s\m\w\a\6\n\d\i\6\6\t\z\5\1\e\6\8\v\n\o\v\0\s\u\n\8\h\n\e\j\3\j\3\n\3\z\v\l\x\6\q\w\4\t\h\y\d\0\4\e\n\x\r\6\a\k\8\b\5\g\o\m\k\1\b\a\5\3\o\g\1\n\o\p\y\x\2\l\1\i\9\k\y\8\m\b\r\f\4\8\j\3\m\8\3\v\x\3\e\g\i\r\9\z\e\9\d\a\a\b\f\m\r\g\l\5\6\o\b\l\m\h\5\8\o\a\5\4\p\4\q\a\v\5\6\j\n\9\p\v\7\b\2\r\2\3\h\6\h\t\7\1\h\j\s\r\m\c\b\g\q\3\s\k\h\x\4\n\f\r\y\z\x\h\g\d\w\p\v\8\k\s\0\o\8\e\w\4\r\i\8\g\m\y\o\s\7\r\2\v\0\n\j\x\k\x\a\7\v\p\3\g\q\2\r\h\z\5\t\1\d\9\1\y\u\6\y\f\b\c\g\y\m\x\n\i\u\s\y\1\i\s\4\0\h\j\3\y\h\b
\a\f\7\e\l\2\a\t\j\n\m\q\b\e\6\y\u\x\a\z\z\o\1\0\v\h\v\9\u\r\g\2\v\e\3\f\m\5\4\z\i\9\g\3\z\k\x\e\k\l\u\s\b\m\h\j\h\o\f\p\5\4\d\j\c\s\z\0\j\f\3\9\n\g\e\t\r\i\g\x\1\2\t\7\r\2\2\l\c\h\y\0\r\3\a\s\k\r\u\n\m\h\e\s\k\h\o\z\e\7\a\3\8\e\2\9\y\c\b\j\k\s\a\m\b\v\v\3\s\p\f\z\c\c\h\i\v\p\e\0\g\0\w\p\p\i\x\4\h\c\o\v\x\e\7\p\r\y\f\y\x\b\4\o\n\5\a\k\6\3\m\x\s\w\8\o\q\0\3\r\d\n\n\1\8\h\e\t\5\n\q\x\c\d\7\x\i\1\q\l\r\9\9\a\w\w\q\q\v\w\2\3\f\n\u\c\u\t\l\n\i\b\d\g\u\3\s\1\g\c\i\t\x\r\1\b\q\g\f\l\u\r\m\8\z\5\s\k\e\8\0\j\d\i\l\9\9\k\d\f\0\c\2\i\t\9\4\1\b\y\a\r\h\z\s\t\1\y\v\r\5\5\5\s\0\4\j\w\g\y\h\1\8\f\c\m\p\2\o\7\t\h\g\c\6\f\2\h\n\r\w\f\w\r\7\y\y\w\z\4\5\n\7\j\q\y\6\d\k\6\5\j\9\t\2\5\g\w\v\u\w\2\1\q\j\1\n\7\l\y\i\b\i\v\j\y\e\f\l\w\4\u\0\s\v\7\9\r\6\h\x\7\m\b\d\e\5\m\n\u\m\q\y\3\b\7\k\t\b\c\w\a\o\e\3\j\2\a\e\x\d\s\p\p\d\2\w\t\o\o\0\c\1\k\i\d\1\w\j\k\4\6\o\y\9\a\d\q\0\v\r\t\m\m\l\z\4\3\p\b\h\7\v\d\6\i\r\e\h\m\2\i\j\r\s\a\8\l\7\y\y\q\0\0\7\c\l\8\5\j\0\r\2\w\u\w\x\f\v\9\t\q\n\1\p\6\v\5\a\2\p\t\h\7\z\b\n\t\7\j\v\o\e\4\t\p\6\y\m\0\1\w\a\7\y\x\0\4\j\5\b\8\0\f\p\4\x\5\2\0\9\9\v\s\k\t\6\i\6\v\p\6\m\n\6\7\1\d\e\v\0\3\7\n\t\7\g\3\p\6\m\y\e\b\b\u\t\4\8\9\l\2\p\4\0\0\8\m\h\p\0\m\8\y\k\9\u\q\4\s\1\s\k\f\9\7\w\g\l\9\e\j\4\8\v\4\2\e\p\i\4\r\s\b\j\9\a\3\i\7\3\8\5\c\b\9\8\0\l\l\e\t\r\f\c\k\3\i\9\9\u\g\f\b\q\d\x\c\c\2\t\b\4\i\o\s\p\m\g\n\1\x\6\q\m\e\r\i\h\o\8\2\p\0\j\d\d\x\1\e\m\c\c\z\x\1\b\o\x\2\j\e\u\1\d\5\d\6\3\7\l\1\n\i\b\a\i\z\o\9\f\z\3\3\3\j\j\1\7\o\w\9\4\r\7\7\3\x\t\n\y\t\7\f\k\a\o\5\l\v\f\f\c\a\2\5\t\z\2\v\k\f\8\n\g\w\6\j\l\u\8\2\t\y\m\4\3\k\b\v\f\4\8\k\w\j\v\5\s\7\6\q\z\k\1\7\o\r\9\g\4\1\k\g\u\m\p\c\f\a\9\h\l\5\n\j\k\s\q\f\m\7\p\l\c\1\w\e\v\t\z\p\j\j\u\c\0\r\0\o\4\z\l\u\9\d\7\c\z\t\3\i\4\u\3\j\o\q\j\1\u\l\f\5\8\3\e\a\p\8\y\o\e\x\m\5\9\a\m\c\s\8\n\o\k\0\9\f\n\c\k\5\5\i\0\o\f\i\c\8\h\s\n\7\9\1\8\t\k\5\q\p\e\b\g\m\n\6\m\6\o\n\5\f\9\z\t\i\5\q\q\x\6\x\8\w\4\a\h\z\a\8\w\s\d\2\3\q\o\u\h\d\b\n\f\3\x\y\8\5\f\d\p\h\z\6\4\i\1\e\s\5\l\l\7\m\w\1\f\w\g\v\o\3\h\b\7\w\l\z\3\b\y\1\g\2\u\a\t\c\x\8\g\s\j\b\m\8\1\3\m\8\n\f\7\9\v\e\c\3\l\s\l\6\g\3\2\6\9\t\0\g\n\8\t\6\7\o\x\9\e\u\l\z\7\q\l\u\8\i\a\m\i\k\9\i\m\y\h\r\7\7\v\g\z\h\s\n\w\4\n\q\i\0\d\i\i\4\g\d\0\4\s\k\m\x\t\z\i\g\p\q\h\v\o\h\n\v\3\q\0\y\8\f\v\k\8\p\f\a\r\q\9\k\e\5\j\n\o\i\i\5\x\o\3\j\7\0\1\8\q\a\0\8\a\r\l\4\o\x\i\o\o\o\x\z\6\m\c\u\t\z\u\y\u\z\7\k\t\8\s\b\y\z\m\3\7\g\o\j\k\e\m\c\5\2\d\a\n\b\l\m\8\o\m\p\o\k\0\o\2\4\0\d\d\p\7\g\e\m\e\m\y\w\e\f\t\r\t\k\4\0\8\8\7\5\u\7\a\v\z\y\i\p\l\y\i\u\9\t\d\v\l\o\x\j\1\d\a\6\i\5\z\r\t\k\4\n\w\t\p\v\h\u\0\v\e\b\x\l\v\n\m\p\x\2\e\6\s\z\8\1\n\8\e\e\f\w\s\4\h\w\b ]] 00:08:40.941 00:08:40.941 real 0m3.326s 00:08:40.941 user 0m2.840s 00:08:40.941 sys 0m1.451s 00:08:40.941 ************************************ 00:08:40.941 END TEST dd_rw_offset 00:08:40.941 ************************************ 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:40.941 02:57:47 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:40.941 02:57:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:40.941 { 00:08:40.941 "subsystems": [ 00:08:40.941 { 00:08:40.941 "subsystem": "bdev", 00:08:40.941 "config": [ 00:08:40.941 { 00:08:40.941 "params": { 00:08:40.941 "trtype": "pcie", 00:08:40.941 "traddr": "0000:00:10.0", 00:08:40.941 "name": "Nvme0" 00:08:40.941 }, 00:08:40.941 "method": "bdev_nvme_attach_controller" 00:08:40.941 }, 00:08:40.941 { 00:08:40.941 "method": "bdev_wait_for_examine" 00:08:40.941 } 00:08:40.941 ] 00:08:40.941 } 00:08:40.941 ] 00:08:40.941 } 00:08:40.941 [2024-07-13 02:57:47.316407] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:40.941 [2024-07-13 02:57:47.316569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64636 ] 00:08:41.201 [2024-07-13 02:57:47.485457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.201 [2024-07-13 02:57:47.641583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.460 [2024-07-13 02:57:47.805878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:42.655  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:42.655 00:08:42.655 02:57:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.655 00:08:42.655 real 0m39.651s 00:08:42.655 user 0m33.518s 00:08:42.655 sys 0m15.953s 00:08:42.655 02:57:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.655 ************************************ 00:08:42.655 END TEST spdk_dd_basic_rw 00:08:42.655 02:57:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:42.655 ************************************ 00:08:42.655 02:57:49 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:42.655 02:57:49 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:42.914 02:57:49 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:42.914 02:57:49 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.914 02:57:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:42.914 ************************************ 00:08:42.914 START TEST spdk_dd_posix 00:08:42.914 ************************************ 00:08:42.914 02:57:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:42.914 * Looking for test storage... 
00:08:42.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:42.914 02:57:49 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.914 02:57:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.914 02:57:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.914 02:57:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.914 02:57:49 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.914 02:57:49 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:42.915 * First test run, liburing in use 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:42.915 ************************************ 00:08:42.915 START TEST dd_flag_append 00:08:42.915 ************************************ 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=z77jych665pa2y6kdiqlw5b7jcjxvybp 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=026ixcwcwakhx9isxtxhs1d7k2vkt0w3 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s z77jych665pa2y6kdiqlw5b7jcjxvybp 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 026ixcwcwakhx9isxtxhs1d7k2vkt0w3 00:08:42.915 02:57:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:42.915 [2024-07-13 02:57:49.371007] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:42.915 [2024-07-13 02:57:49.371165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64712 ] 00:08:43.174 [2024-07-13 02:57:49.541198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.432 [2024-07-13 02:57:49.703056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.432 [2024-07-13 02:57:49.855637] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:44.627  Copying: 32/32 [B] (average 31 kBps) 00:08:44.627 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 026ixcwcwakhx9isxtxhs1d7k2vkt0w3z77jych665pa2y6kdiqlw5b7jcjxvybp == \0\2\6\i\x\c\w\c\w\a\k\h\x\9\i\s\x\t\x\h\s\1\d\7\k\2\v\k\t\0\w\3\z\7\7\j\y\c\h\6\6\5\p\a\2\y\6\k\d\i\q\l\w\5\b\7\j\c\j\x\v\y\b\p ]] 00:08:44.627 00:08:44.627 real 0m1.673s 00:08:44.627 user 0m1.380s 00:08:44.627 sys 0m0.799s 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.627 ************************************ 00:08:44.627 END TEST dd_flag_append 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:44.627 ************************************ 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:44.627 ************************************ 00:08:44.627 START TEST dd_flag_directory 00:08:44.627 ************************************ 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:44.627 02:57:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:44.627 [2024-07-13 02:57:51.089007] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:44.628 [2024-07-13 02:57:51.089169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64753 ] 00:08:44.886 [2024-07-13 02:57:51.255880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.145 [2024-07-13 02:57:51.416923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.145 [2024-07-13 02:57:51.573174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:45.403 [2024-07-13 02:57:51.651831] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:45.403 [2024-07-13 02:57:51.651926] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:45.403 [2024-07-13 02:57:51.651949] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:45.969 [2024-07-13 02:57:52.233202] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:46.228 02:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:46.489 [2024-07-13 02:57:52.723393] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:46.489 [2024-07-13 02:57:52.723577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64780 ] 00:08:46.489 [2024-07-13 02:57:52.892555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.749 [2024-07-13 02:57:53.043869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.749 [2024-07-13 02:57:53.195584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:47.007 [2024-07-13 02:57:53.269043] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:47.007 [2024-07-13 02:57:53.269110] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:47.007 [2024-07-13 02:57:53.269133] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:47.573 [2024-07-13 02:57:53.845020] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:47.832 00:08:47.832 real 0m3.226s 00:08:47.832 user 0m2.624s 00:08:47.832 sys 0m0.382s 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:47.832 ************************************ 00:08:47.832 END TEST dd_flag_directory 00:08:47.832 
************************************ 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:47.832 ************************************ 00:08:47.832 START TEST dd_flag_nofollow 00:08:47.832 ************************************ 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:47.832 02:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.129 
[2024-07-13 02:57:54.382782] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:48.129 [2024-07-13 02:57:54.382996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64815 ] 00:08:48.129 [2024-07-13 02:57:54.552852] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.408 [2024-07-13 02:57:54.729135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.667 [2024-07-13 02:57:54.913041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:48.667 [2024-07-13 02:57:55.006027] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:48.667 [2024-07-13 02:57:55.006103] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:48.667 [2024-07-13 02:57:55.006144] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:49.235 [2024-07-13 02:57:55.705410] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:49.803 02:57:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:49.803 [2024-07-13 02:57:56.244530] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:49.803 [2024-07-13 02:57:56.244729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64842 ] 00:08:50.062 [2024-07-13 02:57:56.419923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.321 [2024-07-13 02:57:56.662680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.579 [2024-07-13 02:57:56.868124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:50.579 [2024-07-13 02:57:56.960297] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:50.579 [2024-07-13 02:57:56.960383] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:50.579 [2024-07-13 02:57:56.960406] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:51.516 [2024-07-13 02:57:57.648242] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:51.774 02:57:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:08:51.774 02:57:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:51.774 02:57:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:08:51.774 02:57:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:08:51.774 02:57:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:08:51.774 02:57:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:51.774 02:57:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:51.774 02:57:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:51.774 02:57:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:51.774 02:57:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:51.775 [2024-07-13 02:57:58.179242] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:51.775 [2024-07-13 02:57:58.179452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64867 ] 00:08:52.034 [2024-07-13 02:57:58.348086] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.292 [2024-07-13 02:57:58.538047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.292 [2024-07-13 02:57:58.719861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:53.486  Copying: 512/512 [B] (average 500 kBps) 00:08:53.486 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ do8gu74trqvxrp4isai5f0ugs231jmi6cdodchr7za7lgmffeshivyd3fu7yyjea25v9k1fav4djj1f7as6qjbctpm0o9ixhl8pds4opvg17rslri9amhuvqe7di4btnkejk1dk6on0739ru74qs675doee4evyp5s8is4xsdg8h5g6mhwt92j8ekq449uv1hxq33vrfax5hvluyzgwc5dhnmscejx20ic9i9pdaie1ach3l7p4pyln9q5ezvqz5jukltvdzmgjglispn8b6ria5tj7jzym1fxv8innqprbfp1gza1dcin2fxo70ctmfhn41u3s2w6lvagg4nimxcp9lwalkw0qw5hfj4uygu8nsl9vw47l2oin997ny84pb3k54tkaq128yo4kgnoh11kq7s6vmsj1y8srbsj1o6djjjtdbb13d8851qozocumpj9eddp29qq6cs2itabh72cmt69fdmn9k7fn9xzlbw0s8sb5xgw7kqnuife2d9inh == \d\o\8\g\u\7\4\t\r\q\v\x\r\p\4\i\s\a\i\5\f\0\u\g\s\2\3\1\j\m\i\6\c\d\o\d\c\h\r\7\z\a\7\l\g\m\f\f\e\s\h\i\v\y\d\3\f\u\7\y\y\j\e\a\2\5\v\9\k\1\f\a\v\4\d\j\j\1\f\7\a\s\6\q\j\b\c\t\p\m\0\o\9\i\x\h\l\8\p\d\s\4\o\p\v\g\1\7\r\s\l\r\i\9\a\m\h\u\v\q\e\7\d\i\4\b\t\n\k\e\j\k\1\d\k\6\o\n\0\7\3\9\r\u\7\4\q\s\6\7\5\d\o\e\e\4\e\v\y\p\5\s\8\i\s\4\x\s\d\g\8\h\5\g\6\m\h\w\t\9\2\j\8\e\k\q\4\4\9\u\v\1\h\x\q\3\3\v\r\f\a\x\5\h\v\l\u\y\z\g\w\c\5\d\h\n\m\s\c\e\j\x\2\0\i\c\9\i\9\p\d\a\i\e\1\a\c\h\3\l\7\p\4\p\y\l\n\9\q\5\e\z\v\q\z\5\j\u\k\l\t\v\d\z\m\g\j\g\l\i\s\p\n\8\b\6\r\i\a\5\t\j\7\j\z\y\m\1\f\x\v\8\i\n\n\q\p\r\b\f\p\1\g\z\a\1\d\c\i\n\2\f\x\o\7\0\c\t\m\f\h\n\4\1\u\3\s\2\w\6\l\v\a\g\g\4\n\i\m\x\c\p\9\l\w\a\l\k\w\0\q\w\5\h\f\j\4\u\y\g\u\8\n\s\l\9\v\w\4\7\l\2\o\i\n\9\9\7\n\y\8\4\p\b\3\k\5\4\t\k\a\q\1\2\8\y\o\4\k\g\n\o\h\1\1\k\q\7\s\6\v\m\s\j\1\y\8\s\r\b\s\j\1\o\6\d\j\j\j\t\d\b\b\1\3\d\8\8\5\1\q\o\z\o\c\u\m\p\j\9\e\d\d\p\2\9\q\q\6\c\s\2\i\t\a\b\h\7\2\c\m\t\6\9\f\d\m\n\9\k\7\f\n\9\x\z\l\b\w\0\s\8\s\b\5\x\g\w\7\k\q\n\u\i\f\e\2\d\9\i\n\h ]] 00:08:53.486 00:08:53.486 real 0m5.649s 00:08:53.486 user 0m4.676s 00:08:53.486 sys 0m1.315s 00:08:53.486 ************************************ 00:08:53.486 END TEST dd_flag_nofollow 00:08:53.486 ************************************ 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:53.486 ************************************ 00:08:53.486 START TEST dd_flag_noatime 00:08:53.486 ************************************ 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:08:53.486 02:57:59 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:53.486 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:53.750 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:53.750 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1720839478 00:08:53.750 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:53.750 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1720839479 00:08:53.750 02:57:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:54.683 02:58:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:54.683 [2024-07-13 02:58:01.098715] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:54.683 [2024-07-13 02:58:01.098911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64927 ] 00:08:54.941 [2024-07-13 02:58:01.268093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.941 [2024-07-13 02:58:01.418905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.198 [2024-07-13 02:58:01.570962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:56.172  Copying: 512/512 [B] (average 500 kBps) 00:08:56.172 00:08:56.172 02:58:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:56.172 02:58:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1720839478 )) 00:08:56.172 02:58:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:56.172 02:58:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1720839479 )) 00:08:56.172 02:58:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:56.430 [2024-07-13 02:58:02.763824] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:56.430 [2024-07-13 02:58:02.764030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64953 ] 00:08:56.687 [2024-07-13 02:58:02.939973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.688 [2024-07-13 02:58:03.173716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.946 [2024-07-13 02:58:03.341959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:58.323  Copying: 512/512 [B] (average 500 kBps) 00:08:58.323 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1720839483 )) 00:08:58.323 00:08:58.323 real 0m4.473s 00:08:58.323 user 0m2.854s 00:08:58.323 sys 0m1.635s 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.323 ************************************ 00:08:58.323 END TEST dd_flag_noatime 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:58.323 ************************************ 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:58.323 ************************************ 00:08:58.323 START TEST dd_flags_misc 00:08:58.323 ************************************ 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:58.323 02:58:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:58.323 [2024-07-13 02:58:04.611609] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:58.323 [2024-07-13 02:58:04.611778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64993 ] 00:08:58.323 [2024-07-13 02:58:04.785018] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.582 [2024-07-13 02:58:05.008651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.841 [2024-07-13 02:58:05.170322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:59.778  Copying: 512/512 [B] (average 500 kBps) 00:08:59.778 00:08:59.778 02:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rt1x963rcj98163ibigfo2cewb8jb67nmvu85tww3m7lzj8phuhdinewqxjpwwwnp39vffq0ayprhrst14ufcgqz73doombroyzq8hxj9e7rlp8m04newnxfkwt6ppuafigwo7x1kvx34598i5z3vuaaj006pwxjcjmbw9r2puwlkpoj71qu4qlu1z82t294p84l7ah8nirw59ujrcemw9ieelu2vbl6dam07ju1dfwb9u7hkahv24dc3hw2h150hndph68hqhapglcqnuzt1q3qxfdo1b9wxfddh0astosbar4atd1zwxdoccofj1axfv1baotrvyy29k42gby3v91se6gxy4u5vp1gzlp358n9oqjsqh836cfb62auibakx7gzg9uaimn4dilxx02i5p33c9o81ko2705cncv6t7nto12nd5opg2tzzszfcfjbl9plxd1frwskwqrrduf8njvq1bvcalvrlz5z1xl99gp6a6aoi2l1gybjk90mv58s == \r\t\1\x\9\6\3\r\c\j\9\8\1\6\3\i\b\i\g\f\o\2\c\e\w\b\8\j\b\6\7\n\m\v\u\8\5\t\w\w\3\m\7\l\z\j\8\p\h\u\h\d\i\n\e\w\q\x\j\p\w\w\w\n\p\3\9\v\f\f\q\0\a\y\p\r\h\r\s\t\1\4\u\f\c\g\q\z\7\3\d\o\o\m\b\r\o\y\z\q\8\h\x\j\9\e\7\r\l\p\8\m\0\4\n\e\w\n\x\f\k\w\t\6\p\p\u\a\f\i\g\w\o\7\x\1\k\v\x\3\4\5\9\8\i\5\z\3\v\u\a\a\j\0\0\6\p\w\x\j\c\j\m\b\w\9\r\2\p\u\w\l\k\p\o\j\7\1\q\u\4\q\l\u\1\z\8\2\t\2\9\4\p\8\4\l\7\a\h\8\n\i\r\w\5\9\u\j\r\c\e\m\w\9\i\e\e\l\u\2\v\b\l\6\d\a\m\0\7\j\u\1\d\f\w\b\9\u\7\h\k\a\h\v\2\4\d\c\3\h\w\2\h\1\5\0\h\n\d\p\h\6\8\h\q\h\a\p\g\l\c\q\n\u\z\t\1\q\3\q\x\f\d\o\1\b\9\w\x\f\d\d\h\0\a\s\t\o\s\b\a\r\4\a\t\d\1\z\w\x\d\o\c\c\o\f\j\1\a\x\f\v\1\b\a\o\t\r\v\y\y\2\9\k\4\2\g\b\y\3\v\9\1\s\e\6\g\x\y\4\u\5\v\p\1\g\z\l\p\3\5\8\n\9\o\q\j\s\q\h\8\3\6\c\f\b\6\2\a\u\i\b\a\k\x\7\g\z\g\9\u\a\i\m\n\4\d\i\l\x\x\0\2\i\5\p\3\3\c\9\o\8\1\k\o\2\7\0\5\c\n\c\v\6\t\7\n\t\o\1\2\n\d\5\o\p\g\2\t\z\z\s\z\f\c\f\j\b\l\9\p\l\x\d\1\f\r\w\s\k\w\q\r\r\d\u\f\8\n\j\v\q\1\b\v\c\a\l\v\r\l\z\5\z\1\x\l\9\9\g\p\6\a\6\a\o\i\2\l\1\g\y\b\j\k\9\0\m\v\5\8\s ]] 00:08:59.778 02:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:59.778 02:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:00.037 [2024-07-13 02:58:06.362843] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:00.037 [2024-07-13 02:58:06.363030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65019 ] 00:09:00.037 [2024-07-13 02:58:06.528530] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.296 [2024-07-13 02:58:06.694145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.553 [2024-07-13 02:58:06.857123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:01.489  Copying: 512/512 [B] (average 500 kBps) 00:09:01.489 00:09:01.489 02:58:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rt1x963rcj98163ibigfo2cewb8jb67nmvu85tww3m7lzj8phuhdinewqxjpwwwnp39vffq0ayprhrst14ufcgqz73doombroyzq8hxj9e7rlp8m04newnxfkwt6ppuafigwo7x1kvx34598i5z3vuaaj006pwxjcjmbw9r2puwlkpoj71qu4qlu1z82t294p84l7ah8nirw59ujrcemw9ieelu2vbl6dam07ju1dfwb9u7hkahv24dc3hw2h150hndph68hqhapglcqnuzt1q3qxfdo1b9wxfddh0astosbar4atd1zwxdoccofj1axfv1baotrvyy29k42gby3v91se6gxy4u5vp1gzlp358n9oqjsqh836cfb62auibakx7gzg9uaimn4dilxx02i5p33c9o81ko2705cncv6t7nto12nd5opg2tzzszfcfjbl9plxd1frwskwqrrduf8njvq1bvcalvrlz5z1xl99gp6a6aoi2l1gybjk90mv58s == \r\t\1\x\9\6\3\r\c\j\9\8\1\6\3\i\b\i\g\f\o\2\c\e\w\b\8\j\b\6\7\n\m\v\u\8\5\t\w\w\3\m\7\l\z\j\8\p\h\u\h\d\i\n\e\w\q\x\j\p\w\w\w\n\p\3\9\v\f\f\q\0\a\y\p\r\h\r\s\t\1\4\u\f\c\g\q\z\7\3\d\o\o\m\b\r\o\y\z\q\8\h\x\j\9\e\7\r\l\p\8\m\0\4\n\e\w\n\x\f\k\w\t\6\p\p\u\a\f\i\g\w\o\7\x\1\k\v\x\3\4\5\9\8\i\5\z\3\v\u\a\a\j\0\0\6\p\w\x\j\c\j\m\b\w\9\r\2\p\u\w\l\k\p\o\j\7\1\q\u\4\q\l\u\1\z\8\2\t\2\9\4\p\8\4\l\7\a\h\8\n\i\r\w\5\9\u\j\r\c\e\m\w\9\i\e\e\l\u\2\v\b\l\6\d\a\m\0\7\j\u\1\d\f\w\b\9\u\7\h\k\a\h\v\2\4\d\c\3\h\w\2\h\1\5\0\h\n\d\p\h\6\8\h\q\h\a\p\g\l\c\q\n\u\z\t\1\q\3\q\x\f\d\o\1\b\9\w\x\f\d\d\h\0\a\s\t\o\s\b\a\r\4\a\t\d\1\z\w\x\d\o\c\c\o\f\j\1\a\x\f\v\1\b\a\o\t\r\v\y\y\2\9\k\4\2\g\b\y\3\v\9\1\s\e\6\g\x\y\4\u\5\v\p\1\g\z\l\p\3\5\8\n\9\o\q\j\s\q\h\8\3\6\c\f\b\6\2\a\u\i\b\a\k\x\7\g\z\g\9\u\a\i\m\n\4\d\i\l\x\x\0\2\i\5\p\3\3\c\9\o\8\1\k\o\2\7\0\5\c\n\c\v\6\t\7\n\t\o\1\2\n\d\5\o\p\g\2\t\z\z\s\z\f\c\f\j\b\l\9\p\l\x\d\1\f\r\w\s\k\w\q\r\r\d\u\f\8\n\j\v\q\1\b\v\c\a\l\v\r\l\z\5\z\1\x\l\9\9\g\p\6\a\6\a\o\i\2\l\1\g\y\b\j\k\9\0\m\v\5\8\s ]] 00:09:01.489 02:58:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:01.489 02:58:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:01.748 [2024-07-13 02:58:08.057204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:01.748 [2024-07-13 02:58:08.057383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65042 ] 00:09:01.748 [2024-07-13 02:58:08.228598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.008 [2024-07-13 02:58:08.399347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.266 [2024-07-13 02:58:08.557044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.202  Copying: 512/512 [B] (average 125 kBps) 00:09:03.202 00:09:03.202 02:58:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rt1x963rcj98163ibigfo2cewb8jb67nmvu85tww3m7lzj8phuhdinewqxjpwwwnp39vffq0ayprhrst14ufcgqz73doombroyzq8hxj9e7rlp8m04newnxfkwt6ppuafigwo7x1kvx34598i5z3vuaaj006pwxjcjmbw9r2puwlkpoj71qu4qlu1z82t294p84l7ah8nirw59ujrcemw9ieelu2vbl6dam07ju1dfwb9u7hkahv24dc3hw2h150hndph68hqhapglcqnuzt1q3qxfdo1b9wxfddh0astosbar4atd1zwxdoccofj1axfv1baotrvyy29k42gby3v91se6gxy4u5vp1gzlp358n9oqjsqh836cfb62auibakx7gzg9uaimn4dilxx02i5p33c9o81ko2705cncv6t7nto12nd5opg2tzzszfcfjbl9plxd1frwskwqrrduf8njvq1bvcalvrlz5z1xl99gp6a6aoi2l1gybjk90mv58s == \r\t\1\x\9\6\3\r\c\j\9\8\1\6\3\i\b\i\g\f\o\2\c\e\w\b\8\j\b\6\7\n\m\v\u\8\5\t\w\w\3\m\7\l\z\j\8\p\h\u\h\d\i\n\e\w\q\x\j\p\w\w\w\n\p\3\9\v\f\f\q\0\a\y\p\r\h\r\s\t\1\4\u\f\c\g\q\z\7\3\d\o\o\m\b\r\o\y\z\q\8\h\x\j\9\e\7\r\l\p\8\m\0\4\n\e\w\n\x\f\k\w\t\6\p\p\u\a\f\i\g\w\o\7\x\1\k\v\x\3\4\5\9\8\i\5\z\3\v\u\a\a\j\0\0\6\p\w\x\j\c\j\m\b\w\9\r\2\p\u\w\l\k\p\o\j\7\1\q\u\4\q\l\u\1\z\8\2\t\2\9\4\p\8\4\l\7\a\h\8\n\i\r\w\5\9\u\j\r\c\e\m\w\9\i\e\e\l\u\2\v\b\l\6\d\a\m\0\7\j\u\1\d\f\w\b\9\u\7\h\k\a\h\v\2\4\d\c\3\h\w\2\h\1\5\0\h\n\d\p\h\6\8\h\q\h\a\p\g\l\c\q\n\u\z\t\1\q\3\q\x\f\d\o\1\b\9\w\x\f\d\d\h\0\a\s\t\o\s\b\a\r\4\a\t\d\1\z\w\x\d\o\c\c\o\f\j\1\a\x\f\v\1\b\a\o\t\r\v\y\y\2\9\k\4\2\g\b\y\3\v\9\1\s\e\6\g\x\y\4\u\5\v\p\1\g\z\l\p\3\5\8\n\9\o\q\j\s\q\h\8\3\6\c\f\b\6\2\a\u\i\b\a\k\x\7\g\z\g\9\u\a\i\m\n\4\d\i\l\x\x\0\2\i\5\p\3\3\c\9\o\8\1\k\o\2\7\0\5\c\n\c\v\6\t\7\n\t\o\1\2\n\d\5\o\p\g\2\t\z\z\s\z\f\c\f\j\b\l\9\p\l\x\d\1\f\r\w\s\k\w\q\r\r\d\u\f\8\n\j\v\q\1\b\v\c\a\l\v\r\l\z\5\z\1\x\l\9\9\g\p\6\a\6\a\o\i\2\l\1\g\y\b\j\k\9\0\m\v\5\8\s ]] 00:09:03.202 02:58:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:03.202 02:58:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:03.461 [2024-07-13 02:58:09.726346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:03.461 [2024-07-13 02:58:09.726512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65063 ] 00:09:03.461 [2024-07-13 02:58:09.895852] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.720 [2024-07-13 02:58:10.060318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.720 [2024-07-13 02:58:10.206020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:05.352  Copying: 512/512 [B] (average 250 kBps) 00:09:05.353 00:09:05.353 02:58:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ rt1x963rcj98163ibigfo2cewb8jb67nmvu85tww3m7lzj8phuhdinewqxjpwwwnp39vffq0ayprhrst14ufcgqz73doombroyzq8hxj9e7rlp8m04newnxfkwt6ppuafigwo7x1kvx34598i5z3vuaaj006pwxjcjmbw9r2puwlkpoj71qu4qlu1z82t294p84l7ah8nirw59ujrcemw9ieelu2vbl6dam07ju1dfwb9u7hkahv24dc3hw2h150hndph68hqhapglcqnuzt1q3qxfdo1b9wxfddh0astosbar4atd1zwxdoccofj1axfv1baotrvyy29k42gby3v91se6gxy4u5vp1gzlp358n9oqjsqh836cfb62auibakx7gzg9uaimn4dilxx02i5p33c9o81ko2705cncv6t7nto12nd5opg2tzzszfcfjbl9plxd1frwskwqrrduf8njvq1bvcalvrlz5z1xl99gp6a6aoi2l1gybjk90mv58s == \r\t\1\x\9\6\3\r\c\j\9\8\1\6\3\i\b\i\g\f\o\2\c\e\w\b\8\j\b\6\7\n\m\v\u\8\5\t\w\w\3\m\7\l\z\j\8\p\h\u\h\d\i\n\e\w\q\x\j\p\w\w\w\n\p\3\9\v\f\f\q\0\a\y\p\r\h\r\s\t\1\4\u\f\c\g\q\z\7\3\d\o\o\m\b\r\o\y\z\q\8\h\x\j\9\e\7\r\l\p\8\m\0\4\n\e\w\n\x\f\k\w\t\6\p\p\u\a\f\i\g\w\o\7\x\1\k\v\x\3\4\5\9\8\i\5\z\3\v\u\a\a\j\0\0\6\p\w\x\j\c\j\m\b\w\9\r\2\p\u\w\l\k\p\o\j\7\1\q\u\4\q\l\u\1\z\8\2\t\2\9\4\p\8\4\l\7\a\h\8\n\i\r\w\5\9\u\j\r\c\e\m\w\9\i\e\e\l\u\2\v\b\l\6\d\a\m\0\7\j\u\1\d\f\w\b\9\u\7\h\k\a\h\v\2\4\d\c\3\h\w\2\h\1\5\0\h\n\d\p\h\6\8\h\q\h\a\p\g\l\c\q\n\u\z\t\1\q\3\q\x\f\d\o\1\b\9\w\x\f\d\d\h\0\a\s\t\o\s\b\a\r\4\a\t\d\1\z\w\x\d\o\c\c\o\f\j\1\a\x\f\v\1\b\a\o\t\r\v\y\y\2\9\k\4\2\g\b\y\3\v\9\1\s\e\6\g\x\y\4\u\5\v\p\1\g\z\l\p\3\5\8\n\9\o\q\j\s\q\h\8\3\6\c\f\b\6\2\a\u\i\b\a\k\x\7\g\z\g\9\u\a\i\m\n\4\d\i\l\x\x\0\2\i\5\p\3\3\c\9\o\8\1\k\o\2\7\0\5\c\n\c\v\6\t\7\n\t\o\1\2\n\d\5\o\p\g\2\t\z\z\s\z\f\c\f\j\b\l\9\p\l\x\d\1\f\r\w\s\k\w\q\r\r\d\u\f\8\n\j\v\q\1\b\v\c\a\l\v\r\l\z\5\z\1\x\l\9\9\g\p\6\a\6\a\o\i\2\l\1\g\y\b\j\k\9\0\m\v\5\8\s ]] 00:09:05.353 02:58:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:05.353 02:58:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:05.353 02:58:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:05.353 02:58:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:05.353 02:58:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:05.353 02:58:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:05.353 [2024-07-13 02:58:11.559862] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:05.353 [2024-07-13 02:58:11.560044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65085 ] 00:09:05.353 [2024-07-13 02:58:11.736360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.616 [2024-07-13 02:58:11.964148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.883 [2024-07-13 02:58:12.184700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.819  Copying: 512/512 [B] (average 500 kBps) 00:09:06.819 00:09:06.819 02:58:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2etajy9e8zi4giocgbsympu2d2tkjw133s4mzhqa1bs5g4rzmq96x4wwnn67dmm2l5qzzlebvpkib6wxkbxo7dslgast3ixotbz8ltal5ay5i08soxc6bo2n4h1en27kpoxc3lkschqvmzulxlqyat5blolhlvq2fzlfhxvgq0ga29fcwtcrj0ubd48a2e3y5bs9ly96awpioonii2nho1qy92xyofilnmtncsenhj4mzkg72frd9wps223mjjizrwisdc8p7b7rex1gsmqzyg163qsxr2jki889syomv0adow6ifczl8x2ipjr3heffl5yk6x580po8iq7ravhcgr757ji2ai6ks7z2p0ag8yfemu24uqupilo0j856nmoadxpm1ynfpiywuauvw85tb73gs779thoxfna19rp9sxcpbkuhmspbdf0ds6p7ft87jm6xgfin0lw3rhrbswpljmv2ncmb14r0t31bkjfff983vr07t6zz6pah7aaveqfg == \2\e\t\a\j\y\9\e\8\z\i\4\g\i\o\c\g\b\s\y\m\p\u\2\d\2\t\k\j\w\1\3\3\s\4\m\z\h\q\a\1\b\s\5\g\4\r\z\m\q\9\6\x\4\w\w\n\n\6\7\d\m\m\2\l\5\q\z\z\l\e\b\v\p\k\i\b\6\w\x\k\b\x\o\7\d\s\l\g\a\s\t\3\i\x\o\t\b\z\8\l\t\a\l\5\a\y\5\i\0\8\s\o\x\c\6\b\o\2\n\4\h\1\e\n\2\7\k\p\o\x\c\3\l\k\s\c\h\q\v\m\z\u\l\x\l\q\y\a\t\5\b\l\o\l\h\l\v\q\2\f\z\l\f\h\x\v\g\q\0\g\a\2\9\f\c\w\t\c\r\j\0\u\b\d\4\8\a\2\e\3\y\5\b\s\9\l\y\9\6\a\w\p\i\o\o\n\i\i\2\n\h\o\1\q\y\9\2\x\y\o\f\i\l\n\m\t\n\c\s\e\n\h\j\4\m\z\k\g\7\2\f\r\d\9\w\p\s\2\2\3\m\j\j\i\z\r\w\i\s\d\c\8\p\7\b\7\r\e\x\1\g\s\m\q\z\y\g\1\6\3\q\s\x\r\2\j\k\i\8\8\9\s\y\o\m\v\0\a\d\o\w\6\i\f\c\z\l\8\x\2\i\p\j\r\3\h\e\f\f\l\5\y\k\6\x\5\8\0\p\o\8\i\q\7\r\a\v\h\c\g\r\7\5\7\j\i\2\a\i\6\k\s\7\z\2\p\0\a\g\8\y\f\e\m\u\2\4\u\q\u\p\i\l\o\0\j\8\5\6\n\m\o\a\d\x\p\m\1\y\n\f\p\i\y\w\u\a\u\v\w\8\5\t\b\7\3\g\s\7\7\9\t\h\o\x\f\n\a\1\9\r\p\9\s\x\c\p\b\k\u\h\m\s\p\b\d\f\0\d\s\6\p\7\f\t\8\7\j\m\6\x\g\f\i\n\0\l\w\3\r\h\r\b\s\w\p\l\j\m\v\2\n\c\m\b\1\4\r\0\t\3\1\b\k\j\f\f\f\9\8\3\v\r\0\7\t\6\z\z\6\p\a\h\7\a\a\v\e\q\f\g ]] 00:09:06.819 02:58:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:06.819 02:58:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:07.078 [2024-07-13 02:58:13.363132] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:07.078 [2024-07-13 02:58:13.363318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65112 ] 00:09:07.078 [2024-07-13 02:58:13.534022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.337 [2024-07-13 02:58:13.707022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.596 [2024-07-13 02:58:13.864384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:08.534  Copying: 512/512 [B] (average 500 kBps) 00:09:08.534 00:09:08.534 02:58:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2etajy9e8zi4giocgbsympu2d2tkjw133s4mzhqa1bs5g4rzmq96x4wwnn67dmm2l5qzzlebvpkib6wxkbxo7dslgast3ixotbz8ltal5ay5i08soxc6bo2n4h1en27kpoxc3lkschqvmzulxlqyat5blolhlvq2fzlfhxvgq0ga29fcwtcrj0ubd48a2e3y5bs9ly96awpioonii2nho1qy92xyofilnmtncsenhj4mzkg72frd9wps223mjjizrwisdc8p7b7rex1gsmqzyg163qsxr2jki889syomv0adow6ifczl8x2ipjr3heffl5yk6x580po8iq7ravhcgr757ji2ai6ks7z2p0ag8yfemu24uqupilo0j856nmoadxpm1ynfpiywuauvw85tb73gs779thoxfna19rp9sxcpbkuhmspbdf0ds6p7ft87jm6xgfin0lw3rhrbswpljmv2ncmb14r0t31bkjfff983vr07t6zz6pah7aaveqfg == \2\e\t\a\j\y\9\e\8\z\i\4\g\i\o\c\g\b\s\y\m\p\u\2\d\2\t\k\j\w\1\3\3\s\4\m\z\h\q\a\1\b\s\5\g\4\r\z\m\q\9\6\x\4\w\w\n\n\6\7\d\m\m\2\l\5\q\z\z\l\e\b\v\p\k\i\b\6\w\x\k\b\x\o\7\d\s\l\g\a\s\t\3\i\x\o\t\b\z\8\l\t\a\l\5\a\y\5\i\0\8\s\o\x\c\6\b\o\2\n\4\h\1\e\n\2\7\k\p\o\x\c\3\l\k\s\c\h\q\v\m\z\u\l\x\l\q\y\a\t\5\b\l\o\l\h\l\v\q\2\f\z\l\f\h\x\v\g\q\0\g\a\2\9\f\c\w\t\c\r\j\0\u\b\d\4\8\a\2\e\3\y\5\b\s\9\l\y\9\6\a\w\p\i\o\o\n\i\i\2\n\h\o\1\q\y\9\2\x\y\o\f\i\l\n\m\t\n\c\s\e\n\h\j\4\m\z\k\g\7\2\f\r\d\9\w\p\s\2\2\3\m\j\j\i\z\r\w\i\s\d\c\8\p\7\b\7\r\e\x\1\g\s\m\q\z\y\g\1\6\3\q\s\x\r\2\j\k\i\8\8\9\s\y\o\m\v\0\a\d\o\w\6\i\f\c\z\l\8\x\2\i\p\j\r\3\h\e\f\f\l\5\y\k\6\x\5\8\0\p\o\8\i\q\7\r\a\v\h\c\g\r\7\5\7\j\i\2\a\i\6\k\s\7\z\2\p\0\a\g\8\y\f\e\m\u\2\4\u\q\u\p\i\l\o\0\j\8\5\6\n\m\o\a\d\x\p\m\1\y\n\f\p\i\y\w\u\a\u\v\w\8\5\t\b\7\3\g\s\7\7\9\t\h\o\x\f\n\a\1\9\r\p\9\s\x\c\p\b\k\u\h\m\s\p\b\d\f\0\d\s\6\p\7\f\t\8\7\j\m\6\x\g\f\i\n\0\l\w\3\r\h\r\b\s\w\p\l\j\m\v\2\n\c\m\b\1\4\r\0\t\3\1\b\k\j\f\f\f\9\8\3\v\r\0\7\t\6\z\z\6\p\a\h\7\a\a\v\e\q\f\g ]] 00:09:08.534 02:58:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:08.534 02:58:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:08.534 [2024-07-13 02:58:15.014734] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:08.534 [2024-07-13 02:58:15.014930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65139 ] 00:09:08.794 [2024-07-13 02:58:15.172961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.053 [2024-07-13 02:58:15.325072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.053 [2024-07-13 02:58:15.472297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:10.249  Copying: 512/512 [B] (average 166 kBps) 00:09:10.249 00:09:10.249 02:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2etajy9e8zi4giocgbsympu2d2tkjw133s4mzhqa1bs5g4rzmq96x4wwnn67dmm2l5qzzlebvpkib6wxkbxo7dslgast3ixotbz8ltal5ay5i08soxc6bo2n4h1en27kpoxc3lkschqvmzulxlqyat5blolhlvq2fzlfhxvgq0ga29fcwtcrj0ubd48a2e3y5bs9ly96awpioonii2nho1qy92xyofilnmtncsenhj4mzkg72frd9wps223mjjizrwisdc8p7b7rex1gsmqzyg163qsxr2jki889syomv0adow6ifczl8x2ipjr3heffl5yk6x580po8iq7ravhcgr757ji2ai6ks7z2p0ag8yfemu24uqupilo0j856nmoadxpm1ynfpiywuauvw85tb73gs779thoxfna19rp9sxcpbkuhmspbdf0ds6p7ft87jm6xgfin0lw3rhrbswpljmv2ncmb14r0t31bkjfff983vr07t6zz6pah7aaveqfg == \2\e\t\a\j\y\9\e\8\z\i\4\g\i\o\c\g\b\s\y\m\p\u\2\d\2\t\k\j\w\1\3\3\s\4\m\z\h\q\a\1\b\s\5\g\4\r\z\m\q\9\6\x\4\w\w\n\n\6\7\d\m\m\2\l\5\q\z\z\l\e\b\v\p\k\i\b\6\w\x\k\b\x\o\7\d\s\l\g\a\s\t\3\i\x\o\t\b\z\8\l\t\a\l\5\a\y\5\i\0\8\s\o\x\c\6\b\o\2\n\4\h\1\e\n\2\7\k\p\o\x\c\3\l\k\s\c\h\q\v\m\z\u\l\x\l\q\y\a\t\5\b\l\o\l\h\l\v\q\2\f\z\l\f\h\x\v\g\q\0\g\a\2\9\f\c\w\t\c\r\j\0\u\b\d\4\8\a\2\e\3\y\5\b\s\9\l\y\9\6\a\w\p\i\o\o\n\i\i\2\n\h\o\1\q\y\9\2\x\y\o\f\i\l\n\m\t\n\c\s\e\n\h\j\4\m\z\k\g\7\2\f\r\d\9\w\p\s\2\2\3\m\j\j\i\z\r\w\i\s\d\c\8\p\7\b\7\r\e\x\1\g\s\m\q\z\y\g\1\6\3\q\s\x\r\2\j\k\i\8\8\9\s\y\o\m\v\0\a\d\o\w\6\i\f\c\z\l\8\x\2\i\p\j\r\3\h\e\f\f\l\5\y\k\6\x\5\8\0\p\o\8\i\q\7\r\a\v\h\c\g\r\7\5\7\j\i\2\a\i\6\k\s\7\z\2\p\0\a\g\8\y\f\e\m\u\2\4\u\q\u\p\i\l\o\0\j\8\5\6\n\m\o\a\d\x\p\m\1\y\n\f\p\i\y\w\u\a\u\v\w\8\5\t\b\7\3\g\s\7\7\9\t\h\o\x\f\n\a\1\9\r\p\9\s\x\c\p\b\k\u\h\m\s\p\b\d\f\0\d\s\6\p\7\f\t\8\7\j\m\6\x\g\f\i\n\0\l\w\3\r\h\r\b\s\w\p\l\j\m\v\2\n\c\m\b\1\4\r\0\t\3\1\b\k\j\f\f\f\9\8\3\v\r\0\7\t\6\z\z\6\p\a\h\7\a\a\v\e\q\f\g ]] 00:09:10.249 02:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:10.249 02:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:10.249 [2024-07-13 02:58:16.728702] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:10.249 [2024-07-13 02:58:16.728925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65155 ] 00:09:10.508 [2024-07-13 02:58:16.903378] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.767 [2024-07-13 02:58:17.068825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.767 [2024-07-13 02:58:17.219193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.964  Copying: 512/512 [B] (average 500 kBps) 00:09:11.964 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2etajy9e8zi4giocgbsympu2d2tkjw133s4mzhqa1bs5g4rzmq96x4wwnn67dmm2l5qzzlebvpkib6wxkbxo7dslgast3ixotbz8ltal5ay5i08soxc6bo2n4h1en27kpoxc3lkschqvmzulxlqyat5blolhlvq2fzlfhxvgq0ga29fcwtcrj0ubd48a2e3y5bs9ly96awpioonii2nho1qy92xyofilnmtncsenhj4mzkg72frd9wps223mjjizrwisdc8p7b7rex1gsmqzyg163qsxr2jki889syomv0adow6ifczl8x2ipjr3heffl5yk6x580po8iq7ravhcgr757ji2ai6ks7z2p0ag8yfemu24uqupilo0j856nmoadxpm1ynfpiywuauvw85tb73gs779thoxfna19rp9sxcpbkuhmspbdf0ds6p7ft87jm6xgfin0lw3rhrbswpljmv2ncmb14r0t31bkjfff983vr07t6zz6pah7aaveqfg == \2\e\t\a\j\y\9\e\8\z\i\4\g\i\o\c\g\b\s\y\m\p\u\2\d\2\t\k\j\w\1\3\3\s\4\m\z\h\q\a\1\b\s\5\g\4\r\z\m\q\9\6\x\4\w\w\n\n\6\7\d\m\m\2\l\5\q\z\z\l\e\b\v\p\k\i\b\6\w\x\k\b\x\o\7\d\s\l\g\a\s\t\3\i\x\o\t\b\z\8\l\t\a\l\5\a\y\5\i\0\8\s\o\x\c\6\b\o\2\n\4\h\1\e\n\2\7\k\p\o\x\c\3\l\k\s\c\h\q\v\m\z\u\l\x\l\q\y\a\t\5\b\l\o\l\h\l\v\q\2\f\z\l\f\h\x\v\g\q\0\g\a\2\9\f\c\w\t\c\r\j\0\u\b\d\4\8\a\2\e\3\y\5\b\s\9\l\y\9\6\a\w\p\i\o\o\n\i\i\2\n\h\o\1\q\y\9\2\x\y\o\f\i\l\n\m\t\n\c\s\e\n\h\j\4\m\z\k\g\7\2\f\r\d\9\w\p\s\2\2\3\m\j\j\i\z\r\w\i\s\d\c\8\p\7\b\7\r\e\x\1\g\s\m\q\z\y\g\1\6\3\q\s\x\r\2\j\k\i\8\8\9\s\y\o\m\v\0\a\d\o\w\6\i\f\c\z\l\8\x\2\i\p\j\r\3\h\e\f\f\l\5\y\k\6\x\5\8\0\p\o\8\i\q\7\r\a\v\h\c\g\r\7\5\7\j\i\2\a\i\6\k\s\7\z\2\p\0\a\g\8\y\f\e\m\u\2\4\u\q\u\p\i\l\o\0\j\8\5\6\n\m\o\a\d\x\p\m\1\y\n\f\p\i\y\w\u\a\u\v\w\8\5\t\b\7\3\g\s\7\7\9\t\h\o\x\f\n\a\1\9\r\p\9\s\x\c\p\b\k\u\h\m\s\p\b\d\f\0\d\s\6\p\7\f\t\8\7\j\m\6\x\g\f\i\n\0\l\w\3\r\h\r\b\s\w\p\l\j\m\v\2\n\c\m\b\1\4\r\0\t\3\1\b\k\j\f\f\f\9\8\3\v\r\0\7\t\6\z\z\6\p\a\h\7\a\a\v\e\q\f\g ]] 00:09:11.964 00:09:11.964 real 0m13.744s 00:09:11.964 user 0m11.316s 00:09:11.964 sys 0m6.500s 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:11.964 ************************************ 00:09:11.964 END TEST dd_flags_misc 00:09:11.964 ************************************ 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:09:11.964 * Second test run, disabling liburing, forcing AIO 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:11.964 ************************************ 00:09:11.964 START TEST dd_flag_append_forced_aio 00:09:11.964 ************************************ 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=bzr49bq32db6ep1t1e61830rr9j7bezq 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:11.964 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:11.965 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=e42kxs5naxcnjk4nlvk71knqzgsvv7q8 00:09:11.965 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s bzr49bq32db6ep1t1e61830rr9j7bezq 00:09:11.965 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s e42kxs5naxcnjk4nlvk71knqzgsvv7q8 00:09:11.965 02:58:18 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:11.965 [2024-07-13 02:58:18.398424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:11.965 [2024-07-13 02:58:18.398607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65201 ] 00:09:12.226 [2024-07-13 02:58:18.567620] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.226 [2024-07-13 02:58:18.717413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.485 [2024-07-13 02:58:18.862838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:13.420  Copying: 32/32 [B] (average 31 kBps) 00:09:13.420 00:09:13.420 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ e42kxs5naxcnjk4nlvk71knqzgsvv7q8bzr49bq32db6ep1t1e61830rr9j7bezq == \e\4\2\k\x\s\5\n\a\x\c\n\j\k\4\n\l\v\k\7\1\k\n\q\z\g\s\v\v\7\q\8\b\z\r\4\9\b\q\3\2\d\b\6\e\p\1\t\1\e\6\1\8\3\0\r\r\9\j\7\b\e\z\q ]] 00:09:13.420 00:09:13.420 real 0m1.579s 00:09:13.420 user 0m1.275s 00:09:13.420 sys 0m0.184s 00:09:13.420 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.420 ************************************ 00:09:13.420 END TEST dd_flag_append_forced_aio 00:09:13.420 ************************************ 00:09:13.420 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:13.679 ************************************ 00:09:13.679 START TEST dd_flag_directory_forced_aio 00:09:13.679 ************************************ 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:13.679 02:58:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:13.679 [2024-07-13 02:58:20.019468] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:13.679 [2024-07-13 02:58:20.019654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65234 ] 00:09:13.938 [2024-07-13 02:58:20.174956] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.938 [2024-07-13 02:58:20.334915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.199 [2024-07-13 02:58:20.483719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.199 [2024-07-13 02:58:20.558609] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:14.199 [2024-07-13 02:58:20.558684] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:14.199 [2024-07-13 02:58:20.558721] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:14.766 [2024-07-13 02:58:21.118588] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:15.024 02:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:15.283 [2024-07-13 02:58:21.582282] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:15.283 [2024-07-13 02:58:21.582482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65261 ] 00:09:15.283 [2024-07-13 02:58:21.752355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.542 [2024-07-13 02:58:21.915616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.802 [2024-07-13 02:58:22.066328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:15.802 [2024-07-13 02:58:22.144549] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:15.802 [2024-07-13 02:58:22.144627] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:15.802 [2024-07-13 02:58:22.144668] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:16.369 [2024-07-13 02:58:22.704237] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:16.627 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:09:16.627 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:16.627 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:09:16.627 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:16.627 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:16.627 
02:58:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:16.627 00:09:16.627 real 0m3.143s 00:09:16.627 user 0m2.577s 00:09:16.627 sys 0m0.348s 00:09:16.627 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.627 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:16.627 ************************************ 00:09:16.627 END TEST dd_flag_directory_forced_aio 00:09:16.627 ************************************ 00:09:16.627 02:58:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:16.627 02:58:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:09:16.627 02:58:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:16.627 02:58:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.627 02:58:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:16.627 ************************************ 00:09:16.627 START TEST dd_flag_nofollow_forced_aio 00:09:16.627 ************************************ 00:09:16.628 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:09:16.628 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:16.886 02:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:16.886 [2024-07-13 02:58:23.231389] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:16.886 [2024-07-13 02:58:23.231600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65302 ] 00:09:17.145 [2024-07-13 02:58:23.399040] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.145 [2024-07-13 02:58:23.554203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.403 [2024-07-13 02:58:23.703547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:17.403 [2024-07-13 02:58:23.778351] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:17.403 [2024-07-13 02:58:23.778422] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:17.403 [2024-07-13 02:58:23.778461] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:17.968 [2024-07-13 02:58:24.424389] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
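The "Too many levels of symbolic links" errors above come from spdk_dd opening dd.dump0.link with O_NOFOLLOW; ELOOP is what the kernel returns when a nofollow open is pointed at a symlink. A stand-alone illustration of the same behaviour with coreutils dd, using placeholder names rather than the test's link files:

# Illustrative sketch (placeholder names): opening a symlink with iflag=nofollow
# (O_NOFOLLOW) fails with ELOOP, the error reported in the trace above.
printf %s 'data' > nofollow.target
ln -fs nofollow.target nofollow.target.link
dd if=nofollow.target.link of=/dev/null iflag=nofollow status=none \
    || echo 'symlink refused, as the test expects'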
00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:18.535 02:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:18.535 [2024-07-13 02:58:24.933642] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:18.535 [2024-07-13 02:58:24.933845] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65323 ] 00:09:18.794 [2024-07-13 02:58:25.103680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.794 [2024-07-13 02:58:25.268181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.053 [2024-07-13 02:58:25.429613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:19.053 [2024-07-13 02:58:25.512335] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:19.053 [2024-07-13 02:58:25.512413] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:19.053 [2024-07-13 02:58:25.512453] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:19.620 [2024-07-13 02:58:26.083102] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:20.188 02:58:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:09:20.188 02:58:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:20.188 02:58:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:09:20.188 02:58:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:20.188 02:58:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:20.188 02:58:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:20.188 02:58:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:09:20.188 02:58:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:20.188 02:58:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:20.188 02:58:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:20.188 [2024-07-13 02:58:26.563418] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:20.188 [2024-07-13 02:58:26.563605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65343 ] 00:09:20.446 [2024-07-13 02:58:26.732961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.446 [2024-07-13 02:58:26.889744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.705 [2024-07-13 02:58:27.035552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:21.653  Copying: 512/512 [B] (average 500 kBps) 00:09:21.653 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ n1yx4irzsqgvh2njc8byp655tlnohwl5k67b8z3d9cip9el5l67i03hvmcgsoal3ri7vt0v6bmyye40dqjq6cbb9it4r61tr5jar046z56m2x2i6bgs9go01t9b2dn8pprx1xxqujg7bvxtrmuo0g0mlkt4julquur35x5drovcv9ifa4heya4te9vn5fqr6y2r83sidv2l6r05b19s1atkj8pcq5ji84co5iocz2dokdui6gsgwbgpynjjwfdporb53jd3la6ikl3un6snwhgxcbqel3rrhnhi76c9vs7tuhvo7zab6t8ypesft9d626oxvbaxjaem6f5bh6h8ukh908l0jd10bgvd9jfohzg3bv5wejzaflvlr18mkwkwn08pey4qayyolqaj1gfzkkzgrbvyko0ftibpfci1ser1hdk9mhlj1lxdf6wqg8a329tdkjbtr3vkgxtafscwvmn7axkovgr1bg3h8oa45287puzyujeux4y93rgoannww == \n\1\y\x\4\i\r\z\s\q\g\v\h\2\n\j\c\8\b\y\p\6\5\5\t\l\n\o\h\w\l\5\k\6\7\b\8\z\3\d\9\c\i\p\9\e\l\5\l\6\7\i\0\3\h\v\m\c\g\s\o\a\l\3\r\i\7\v\t\0\v\6\b\m\y\y\e\4\0\d\q\j\q\6\c\b\b\9\i\t\4\r\6\1\t\r\5\j\a\r\0\4\6\z\5\6\m\2\x\2\i\6\b\g\s\9\g\o\0\1\t\9\b\2\d\n\8\p\p\r\x\1\x\x\q\u\j\g\7\b\v\x\t\r\m\u\o\0\g\0\m\l\k\t\4\j\u\l\q\u\u\r\3\5\x\5\d\r\o\v\c\v\9\i\f\a\4\h\e\y\a\4\t\e\9\v\n\5\f\q\r\6\y\2\r\8\3\s\i\d\v\2\l\6\r\0\5\b\1\9\s\1\a\t\k\j\8\p\c\q\5\j\i\8\4\c\o\5\i\o\c\z\2\d\o\k\d\u\i\6\g\s\g\w\b\g\p\y\n\j\j\w\f\d\p\o\r\b\5\3\j\d\3\l\a\6\i\k\l\3\u\n\6\s\n\w\h\g\x\c\b\q\e\l\3\r\r\h\n\h\i\7\6\c\9\v\s\7\t\u\h\v\o\7\z\a\b\6\t\8\y\p\e\s\f\t\9\d\6\2\6\o\x\v\b\a\x\j\a\e\m\6\f\5\b\h\6\h\8\u\k\h\9\0\8\l\0\j\d\1\0\b\g\v\d\9\j\f\o\h\z\g\3\b\v\5\w\e\j\z\a\f\l\v\l\r\1\8\m\k\w\k\w\n\0\8\p\e\y\4\q\a\y\y\o\l\q\a\j\1\g\f\z\k\k\z\g\r\b\v\y\k\o\0\f\t\i\b\p\f\c\i\1\s\e\r\1\h\d\k\9\m\h\l\j\1\l\x\d\f\6\w\q\g\8\a\3\2\9\t\d\k\j\b\t\r\3\v\k\g\x\t\a\f\s\c\w\v\m\n\7\a\x\k\o\v\g\r\1\b\g\3\h\8\o\a\4\5\2\8\7\p\u\z\y\u\j\e\u\x\4\y\9\3\r\g\o\a\n\n\w\w ]] 00:09:21.653 00:09:21.653 real 0m4.947s 00:09:21.653 user 0m4.037s 00:09:21.653 sys 0m0.568s 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:21.653 ************************************ 00:09:21.653 END TEST dd_flag_nofollow_forced_aio 
00:09:21.653 ************************************ 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:21.653 ************************************ 00:09:21.653 START TEST dd_flag_noatime_forced_aio 00:09:21.653 ************************************ 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1720839507 00:09:21.653 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:21.911 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1720839508 00:09:21.911 02:58:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:09:22.848 02:58:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:22.848 [2024-07-13 02:58:29.256513] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
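The noatime case records each dump file's access time with stat --printf=%X (1720839507 and 1720839508 above), copies dump0 with --iflag=noatime, and later asserts that the recorded atime has not advanced. A minimal stand-alone sketch of the same idea with coreutils dd on a placeholder file:

# Illustrative sketch (placeholder file, owned by the caller so O_NOATIME is
# permitted): a read performed with iflag=noatime must not update the atime.
printf %s 'data' > noatime.example
before=$(stat --printf=%X noatime.example)
sleep 1
dd if=noatime.example of=/dev/null iflag=noatime status=none
after=$(stat --printf=%X noatime.example)
(( after == before )) && echo atime-unchanged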
00:09:22.848 [2024-07-13 02:58:29.257223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65395 ] 00:09:23.108 [2024-07-13 02:58:29.426116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.108 [2024-07-13 02:58:29.582954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.366 [2024-07-13 02:58:29.738198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:24.742  Copying: 512/512 [B] (average 500 kBps) 00:09:24.742 00:09:24.742 02:58:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:24.742 02:58:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1720839507 )) 00:09:24.742 02:58:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:24.742 02:58:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1720839508 )) 00:09:24.742 02:58:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:24.742 [2024-07-13 02:58:30.958266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:24.742 [2024-07-13 02:58:30.958435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65419 ] 00:09:24.743 [2024-07-13 02:58:31.130330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.001 [2024-07-13 02:58:31.299576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.001 [2024-07-13 02:58:31.459613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:26.195  Copying: 512/512 [B] (average 500 kBps) 00:09:26.195 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1720839511 )) 00:09:26.195 00:09:26.195 real 0m4.387s 00:09:26.195 user 0m2.742s 00:09:26.195 sys 0m0.401s 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:26.195 ************************************ 00:09:26.195 END TEST dd_flag_noatime_forced_aio 00:09:26.195 ************************************ 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.195 02:58:32 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:26.195 ************************************ 00:09:26.195 START TEST dd_flags_misc_forced_aio 00:09:26.195 ************************************ 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:26.195 02:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:26.195 [2024-07-13 02:58:32.651838] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:26.195 [2024-07-13 02:58:32.652023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65463 ] 00:09:26.454 [2024-07-13 02:58:32.807581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.713 [2024-07-13 02:58:32.971916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.713 [2024-07-13 02:58:33.127245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.089  Copying: 512/512 [B] (average 500 kBps) 00:09:28.089 00:09:28.089 02:58:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wi3abzterwkp18ie57r02urdtz0oa0o67ns7mhsul7ghqa3nnbz76ttyxtis8ljh3wmj1doh2g1gksphq87skby08e4dkbh8zr1zjqjickkjpzas6t1uti978ehn4dme46b77ln79acbc3p95rgyhm2u9imzb0ax5ad6n9x6jnhhrwutxdst8mwq2mkubpvn66cowj4xjoe6nsd1wzhq3w40fk98w2r3fdz5idtsnlt0nnu4svqshhsutbkn6qgj2tdjphqepc7edjryb59bq2qhvqpz21eoiwrzpycmok8vhh82qba0y3cma85opxy2amix13x5rvb0o5z4caw7ck0srttq73yn56df50z7ek82gjx7zznu0jlp6y3t5sb5hj9uqf2ym5f6tej6e8p9g4je2g4srdzdszzwn60hyn1rzvl3p2hnolcx98kvn10jkajtmush4ssberdex6fooznzt2fdtgaayflsfljj6kdsrdtyahef898vmyavyp20 == 
\w\i\3\a\b\z\t\e\r\w\k\p\1\8\i\e\5\7\r\0\2\u\r\d\t\z\0\o\a\0\o\6\7\n\s\7\m\h\s\u\l\7\g\h\q\a\3\n\n\b\z\7\6\t\t\y\x\t\i\s\8\l\j\h\3\w\m\j\1\d\o\h\2\g\1\g\k\s\p\h\q\8\7\s\k\b\y\0\8\e\4\d\k\b\h\8\z\r\1\z\j\q\j\i\c\k\k\j\p\z\a\s\6\t\1\u\t\i\9\7\8\e\h\n\4\d\m\e\4\6\b\7\7\l\n\7\9\a\c\b\c\3\p\9\5\r\g\y\h\m\2\u\9\i\m\z\b\0\a\x\5\a\d\6\n\9\x\6\j\n\h\h\r\w\u\t\x\d\s\t\8\m\w\q\2\m\k\u\b\p\v\n\6\6\c\o\w\j\4\x\j\o\e\6\n\s\d\1\w\z\h\q\3\w\4\0\f\k\9\8\w\2\r\3\f\d\z\5\i\d\t\s\n\l\t\0\n\n\u\4\s\v\q\s\h\h\s\u\t\b\k\n\6\q\g\j\2\t\d\j\p\h\q\e\p\c\7\e\d\j\r\y\b\5\9\b\q\2\q\h\v\q\p\z\2\1\e\o\i\w\r\z\p\y\c\m\o\k\8\v\h\h\8\2\q\b\a\0\y\3\c\m\a\8\5\o\p\x\y\2\a\m\i\x\1\3\x\5\r\v\b\0\o\5\z\4\c\a\w\7\c\k\0\s\r\t\t\q\7\3\y\n\5\6\d\f\5\0\z\7\e\k\8\2\g\j\x\7\z\z\n\u\0\j\l\p\6\y\3\t\5\s\b\5\h\j\9\u\q\f\2\y\m\5\f\6\t\e\j\6\e\8\p\9\g\4\j\e\2\g\4\s\r\d\z\d\s\z\z\w\n\6\0\h\y\n\1\r\z\v\l\3\p\2\h\n\o\l\c\x\9\8\k\v\n\1\0\j\k\a\j\t\m\u\s\h\4\s\s\b\e\r\d\e\x\6\f\o\o\z\n\z\t\2\f\d\t\g\a\a\y\f\l\s\f\l\j\j\6\k\d\s\r\d\t\y\a\h\e\f\8\9\8\v\m\y\a\v\y\p\2\0 ]] 00:09:28.089 02:58:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:28.089 02:58:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:28.089 [2024-07-13 02:58:34.300080] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:28.089 [2024-07-13 02:58:34.300245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65477 ] 00:09:28.089 [2024-07-13 02:58:34.470410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.349 [2024-07-13 02:58:34.640812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.349 [2024-07-13 02:58:34.791200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:29.546  Copying: 512/512 [B] (average 500 kBps) 00:09:29.546 00:09:29.546 02:58:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wi3abzterwkp18ie57r02urdtz0oa0o67ns7mhsul7ghqa3nnbz76ttyxtis8ljh3wmj1doh2g1gksphq87skby08e4dkbh8zr1zjqjickkjpzas6t1uti978ehn4dme46b77ln79acbc3p95rgyhm2u9imzb0ax5ad6n9x6jnhhrwutxdst8mwq2mkubpvn66cowj4xjoe6nsd1wzhq3w40fk98w2r3fdz5idtsnlt0nnu4svqshhsutbkn6qgj2tdjphqepc7edjryb59bq2qhvqpz21eoiwrzpycmok8vhh82qba0y3cma85opxy2amix13x5rvb0o5z4caw7ck0srttq73yn56df50z7ek82gjx7zznu0jlp6y3t5sb5hj9uqf2ym5f6tej6e8p9g4je2g4srdzdszzwn60hyn1rzvl3p2hnolcx98kvn10jkajtmush4ssberdex6fooznzt2fdtgaayflsfljj6kdsrdtyahef898vmyavyp20 == 
\w\i\3\a\b\z\t\e\r\w\k\p\1\8\i\e\5\7\r\0\2\u\r\d\t\z\0\o\a\0\o\6\7\n\s\7\m\h\s\u\l\7\g\h\q\a\3\n\n\b\z\7\6\t\t\y\x\t\i\s\8\l\j\h\3\w\m\j\1\d\o\h\2\g\1\g\k\s\p\h\q\8\7\s\k\b\y\0\8\e\4\d\k\b\h\8\z\r\1\z\j\q\j\i\c\k\k\j\p\z\a\s\6\t\1\u\t\i\9\7\8\e\h\n\4\d\m\e\4\6\b\7\7\l\n\7\9\a\c\b\c\3\p\9\5\r\g\y\h\m\2\u\9\i\m\z\b\0\a\x\5\a\d\6\n\9\x\6\j\n\h\h\r\w\u\t\x\d\s\t\8\m\w\q\2\m\k\u\b\p\v\n\6\6\c\o\w\j\4\x\j\o\e\6\n\s\d\1\w\z\h\q\3\w\4\0\f\k\9\8\w\2\r\3\f\d\z\5\i\d\t\s\n\l\t\0\n\n\u\4\s\v\q\s\h\h\s\u\t\b\k\n\6\q\g\j\2\t\d\j\p\h\q\e\p\c\7\e\d\j\r\y\b\5\9\b\q\2\q\h\v\q\p\z\2\1\e\o\i\w\r\z\p\y\c\m\o\k\8\v\h\h\8\2\q\b\a\0\y\3\c\m\a\8\5\o\p\x\y\2\a\m\i\x\1\3\x\5\r\v\b\0\o\5\z\4\c\a\w\7\c\k\0\s\r\t\t\q\7\3\y\n\5\6\d\f\5\0\z\7\e\k\8\2\g\j\x\7\z\z\n\u\0\j\l\p\6\y\3\t\5\s\b\5\h\j\9\u\q\f\2\y\m\5\f\6\t\e\j\6\e\8\p\9\g\4\j\e\2\g\4\s\r\d\z\d\s\z\z\w\n\6\0\h\y\n\1\r\z\v\l\3\p\2\h\n\o\l\c\x\9\8\k\v\n\1\0\j\k\a\j\t\m\u\s\h\4\s\s\b\e\r\d\e\x\6\f\o\o\z\n\z\t\2\f\d\t\g\a\a\y\f\l\s\f\l\j\j\6\k\d\s\r\d\t\y\a\h\e\f\8\9\8\v\m\y\a\v\y\p\2\0 ]] 00:09:29.546 02:58:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:29.546 02:58:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:29.546 [2024-07-13 02:58:35.987678] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:29.546 [2024-07-13 02:58:35.987846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65502 ] 00:09:29.806 [2024-07-13 02:58:36.148459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.065 [2024-07-13 02:58:36.317124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.065 [2024-07-13 02:58:36.485152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:31.260  Copying: 512/512 [B] (average 166 kBps) 00:09:31.260 00:09:31.260 02:58:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wi3abzterwkp18ie57r02urdtz0oa0o67ns7mhsul7ghqa3nnbz76ttyxtis8ljh3wmj1doh2g1gksphq87skby08e4dkbh8zr1zjqjickkjpzas6t1uti978ehn4dme46b77ln79acbc3p95rgyhm2u9imzb0ax5ad6n9x6jnhhrwutxdst8mwq2mkubpvn66cowj4xjoe6nsd1wzhq3w40fk98w2r3fdz5idtsnlt0nnu4svqshhsutbkn6qgj2tdjphqepc7edjryb59bq2qhvqpz21eoiwrzpycmok8vhh82qba0y3cma85opxy2amix13x5rvb0o5z4caw7ck0srttq73yn56df50z7ek82gjx7zznu0jlp6y3t5sb5hj9uqf2ym5f6tej6e8p9g4je2g4srdzdszzwn60hyn1rzvl3p2hnolcx98kvn10jkajtmush4ssberdex6fooznzt2fdtgaayflsfljj6kdsrdtyahef898vmyavyp20 == 
\w\i\3\a\b\z\t\e\r\w\k\p\1\8\i\e\5\7\r\0\2\u\r\d\t\z\0\o\a\0\o\6\7\n\s\7\m\h\s\u\l\7\g\h\q\a\3\n\n\b\z\7\6\t\t\y\x\t\i\s\8\l\j\h\3\w\m\j\1\d\o\h\2\g\1\g\k\s\p\h\q\8\7\s\k\b\y\0\8\e\4\d\k\b\h\8\z\r\1\z\j\q\j\i\c\k\k\j\p\z\a\s\6\t\1\u\t\i\9\7\8\e\h\n\4\d\m\e\4\6\b\7\7\l\n\7\9\a\c\b\c\3\p\9\5\r\g\y\h\m\2\u\9\i\m\z\b\0\a\x\5\a\d\6\n\9\x\6\j\n\h\h\r\w\u\t\x\d\s\t\8\m\w\q\2\m\k\u\b\p\v\n\6\6\c\o\w\j\4\x\j\o\e\6\n\s\d\1\w\z\h\q\3\w\4\0\f\k\9\8\w\2\r\3\f\d\z\5\i\d\t\s\n\l\t\0\n\n\u\4\s\v\q\s\h\h\s\u\t\b\k\n\6\q\g\j\2\t\d\j\p\h\q\e\p\c\7\e\d\j\r\y\b\5\9\b\q\2\q\h\v\q\p\z\2\1\e\o\i\w\r\z\p\y\c\m\o\k\8\v\h\h\8\2\q\b\a\0\y\3\c\m\a\8\5\o\p\x\y\2\a\m\i\x\1\3\x\5\r\v\b\0\o\5\z\4\c\a\w\7\c\k\0\s\r\t\t\q\7\3\y\n\5\6\d\f\5\0\z\7\e\k\8\2\g\j\x\7\z\z\n\u\0\j\l\p\6\y\3\t\5\s\b\5\h\j\9\u\q\f\2\y\m\5\f\6\t\e\j\6\e\8\p\9\g\4\j\e\2\g\4\s\r\d\z\d\s\z\z\w\n\6\0\h\y\n\1\r\z\v\l\3\p\2\h\n\o\l\c\x\9\8\k\v\n\1\0\j\k\a\j\t\m\u\s\h\4\s\s\b\e\r\d\e\x\6\f\o\o\z\n\z\t\2\f\d\t\g\a\a\y\f\l\s\f\l\j\j\6\k\d\s\r\d\t\y\a\h\e\f\8\9\8\v\m\y\a\v\y\p\2\0 ]] 00:09:31.260 02:58:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:31.260 02:58:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:31.260 [2024-07-13 02:58:37.670235] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:31.260 [2024-07-13 02:58:37.670403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65522 ] 00:09:31.518 [2024-07-13 02:58:37.844034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.518 [2024-07-13 02:58:38.008216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.775 [2024-07-13 02:58:38.170135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:33.148  Copying: 512/512 [B] (average 500 kBps) 00:09:33.148 00:09:33.148 02:58:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wi3abzterwkp18ie57r02urdtz0oa0o67ns7mhsul7ghqa3nnbz76ttyxtis8ljh3wmj1doh2g1gksphq87skby08e4dkbh8zr1zjqjickkjpzas6t1uti978ehn4dme46b77ln79acbc3p95rgyhm2u9imzb0ax5ad6n9x6jnhhrwutxdst8mwq2mkubpvn66cowj4xjoe6nsd1wzhq3w40fk98w2r3fdz5idtsnlt0nnu4svqshhsutbkn6qgj2tdjphqepc7edjryb59bq2qhvqpz21eoiwrzpycmok8vhh82qba0y3cma85opxy2amix13x5rvb0o5z4caw7ck0srttq73yn56df50z7ek82gjx7zznu0jlp6y3t5sb5hj9uqf2ym5f6tej6e8p9g4je2g4srdzdszzwn60hyn1rzvl3p2hnolcx98kvn10jkajtmush4ssberdex6fooznzt2fdtgaayflsfljj6kdsrdtyahef898vmyavyp20 == 
\w\i\3\a\b\z\t\e\r\w\k\p\1\8\i\e\5\7\r\0\2\u\r\d\t\z\0\o\a\0\o\6\7\n\s\7\m\h\s\u\l\7\g\h\q\a\3\n\n\b\z\7\6\t\t\y\x\t\i\s\8\l\j\h\3\w\m\j\1\d\o\h\2\g\1\g\k\s\p\h\q\8\7\s\k\b\y\0\8\e\4\d\k\b\h\8\z\r\1\z\j\q\j\i\c\k\k\j\p\z\a\s\6\t\1\u\t\i\9\7\8\e\h\n\4\d\m\e\4\6\b\7\7\l\n\7\9\a\c\b\c\3\p\9\5\r\g\y\h\m\2\u\9\i\m\z\b\0\a\x\5\a\d\6\n\9\x\6\j\n\h\h\r\w\u\t\x\d\s\t\8\m\w\q\2\m\k\u\b\p\v\n\6\6\c\o\w\j\4\x\j\o\e\6\n\s\d\1\w\z\h\q\3\w\4\0\f\k\9\8\w\2\r\3\f\d\z\5\i\d\t\s\n\l\t\0\n\n\u\4\s\v\q\s\h\h\s\u\t\b\k\n\6\q\g\j\2\t\d\j\p\h\q\e\p\c\7\e\d\j\r\y\b\5\9\b\q\2\q\h\v\q\p\z\2\1\e\o\i\w\r\z\p\y\c\m\o\k\8\v\h\h\8\2\q\b\a\0\y\3\c\m\a\8\5\o\p\x\y\2\a\m\i\x\1\3\x\5\r\v\b\0\o\5\z\4\c\a\w\7\c\k\0\s\r\t\t\q\7\3\y\n\5\6\d\f\5\0\z\7\e\k\8\2\g\j\x\7\z\z\n\u\0\j\l\p\6\y\3\t\5\s\b\5\h\j\9\u\q\f\2\y\m\5\f\6\t\e\j\6\e\8\p\9\g\4\j\e\2\g\4\s\r\d\z\d\s\z\z\w\n\6\0\h\y\n\1\r\z\v\l\3\p\2\h\n\o\l\c\x\9\8\k\v\n\1\0\j\k\a\j\t\m\u\s\h\4\s\s\b\e\r\d\e\x\6\f\o\o\z\n\z\t\2\f\d\t\g\a\a\y\f\l\s\f\l\j\j\6\k\d\s\r\d\t\y\a\h\e\f\8\9\8\v\m\y\a\v\y\p\2\0 ]] 00:09:33.148 02:58:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:33.148 02:58:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:33.148 02:58:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:33.148 02:58:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:33.148 02:58:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:33.148 02:58:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:33.148 [2024-07-13 02:58:39.365962] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
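The misc-flags runs above and below sweep --iflag over direct and nonblock, and --oflag over direct, nonblock, sync and dsync, comparing dd.dump1 against the generated input after every copy. A compact sketch of that kind of flag matrix with coreutils dd; the file names are placeholders, and oflag=direct is left out here because O_DIRECT additionally needs aligned I/O sizes and a filesystem that supports it:

# Illustrative sketch (placeholder files): copy with several output flags and
# verify the result is bit-identical each time, as the [[ ... ]] checks above do.
dd if=/dev/urandom of=flags.src bs=512 count=1 status=none
for oflag in nonblock sync dsync; do
    dd if=flags.src of=flags.dst oflag="$oflag" status=none
    cmp -s flags.src flags.dst && echo "oflag=$oflag ok"
done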
00:09:33.148 [2024-07-13 02:58:39.366146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65541 ] 00:09:33.148 [2024-07-13 02:58:39.541943] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.404 [2024-07-13 02:58:39.737361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.661 [2024-07-13 02:58:39.927154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:34.595  Copying: 512/512 [B] (average 500 kBps) 00:09:34.595 00:09:34.854 02:58:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8ffwvhmhju10ru095ott133hnfqfosab3rbx3erufmzjnaweqrb0epz3zp9p31lu6pjmb0ql4g0xo94pzewshrdbpenutepb97jddgy0nxm9e8lhgu43vm9y1l3i96xxfn3eslednh1s89xpcaccg4qqnuu4fucekdx9u7auw9f51ty5bewcpckj8c8dskj397op0j29k6gu6wb2krzj9qn8xmv0z6lzz15r1xh1uo8a4q8nevyqo96kuyd0s9iwpherh9tqssyyp1vx279zcvogksotwaz52pdd6s1z7o6tdukkzhko3qmw06m1w7k3z2g6vdbv7y483f2a8rhtz33gq9dgn78gm5jhqlvvrl02iq22l4yhbivktml1a9o0zud9vocawku19rrhj8lk7spe1frt0l9gwnm0ci8b3g6nuradmcbfd33dv97yn6judem9akvi0vfs4s9jyimop81mt96fl4wprxac8yurjqp2s3iplvzsnn2rptiezm4m == \8\f\f\w\v\h\m\h\j\u\1\0\r\u\0\9\5\o\t\t\1\3\3\h\n\f\q\f\o\s\a\b\3\r\b\x\3\e\r\u\f\m\z\j\n\a\w\e\q\r\b\0\e\p\z\3\z\p\9\p\3\1\l\u\6\p\j\m\b\0\q\l\4\g\0\x\o\9\4\p\z\e\w\s\h\r\d\b\p\e\n\u\t\e\p\b\9\7\j\d\d\g\y\0\n\x\m\9\e\8\l\h\g\u\4\3\v\m\9\y\1\l\3\i\9\6\x\x\f\n\3\e\s\l\e\d\n\h\1\s\8\9\x\p\c\a\c\c\g\4\q\q\n\u\u\4\f\u\c\e\k\d\x\9\u\7\a\u\w\9\f\5\1\t\y\5\b\e\w\c\p\c\k\j\8\c\8\d\s\k\j\3\9\7\o\p\0\j\2\9\k\6\g\u\6\w\b\2\k\r\z\j\9\q\n\8\x\m\v\0\z\6\l\z\z\1\5\r\1\x\h\1\u\o\8\a\4\q\8\n\e\v\y\q\o\9\6\k\u\y\d\0\s\9\i\w\p\h\e\r\h\9\t\q\s\s\y\y\p\1\v\x\2\7\9\z\c\v\o\g\k\s\o\t\w\a\z\5\2\p\d\d\6\s\1\z\7\o\6\t\d\u\k\k\z\h\k\o\3\q\m\w\0\6\m\1\w\7\k\3\z\2\g\6\v\d\b\v\7\y\4\8\3\f\2\a\8\r\h\t\z\3\3\g\q\9\d\g\n\7\8\g\m\5\j\h\q\l\v\v\r\l\0\2\i\q\2\2\l\4\y\h\b\i\v\k\t\m\l\1\a\9\o\0\z\u\d\9\v\o\c\a\w\k\u\1\9\r\r\h\j\8\l\k\7\s\p\e\1\f\r\t\0\l\9\g\w\n\m\0\c\i\8\b\3\g\6\n\u\r\a\d\m\c\b\f\d\3\3\d\v\9\7\y\n\6\j\u\d\e\m\9\a\k\v\i\0\v\f\s\4\s\9\j\y\i\m\o\p\8\1\m\t\9\6\f\l\4\w\p\r\x\a\c\8\y\u\r\j\q\p\2\s\3\i\p\l\v\z\s\n\n\2\r\p\t\i\e\z\m\4\m ]] 00:09:34.854 02:58:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:34.854 02:58:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:34.854 [2024-07-13 02:58:41.195966] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:34.854 [2024-07-13 02:58:41.196130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65566 ] 00:09:35.121 [2024-07-13 02:58:41.364957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.121 [2024-07-13 02:58:41.535223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.388 [2024-07-13 02:58:41.704339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:36.325  Copying: 512/512 [B] (average 500 kBps) 00:09:36.325 00:09:36.325 02:58:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8ffwvhmhju10ru095ott133hnfqfosab3rbx3erufmzjnaweqrb0epz3zp9p31lu6pjmb0ql4g0xo94pzewshrdbpenutepb97jddgy0nxm9e8lhgu43vm9y1l3i96xxfn3eslednh1s89xpcaccg4qqnuu4fucekdx9u7auw9f51ty5bewcpckj8c8dskj397op0j29k6gu6wb2krzj9qn8xmv0z6lzz15r1xh1uo8a4q8nevyqo96kuyd0s9iwpherh9tqssyyp1vx279zcvogksotwaz52pdd6s1z7o6tdukkzhko3qmw06m1w7k3z2g6vdbv7y483f2a8rhtz33gq9dgn78gm5jhqlvvrl02iq22l4yhbivktml1a9o0zud9vocawku19rrhj8lk7spe1frt0l9gwnm0ci8b3g6nuradmcbfd33dv97yn6judem9akvi0vfs4s9jyimop81mt96fl4wprxac8yurjqp2s3iplvzsnn2rptiezm4m == \8\f\f\w\v\h\m\h\j\u\1\0\r\u\0\9\5\o\t\t\1\3\3\h\n\f\q\f\o\s\a\b\3\r\b\x\3\e\r\u\f\m\z\j\n\a\w\e\q\r\b\0\e\p\z\3\z\p\9\p\3\1\l\u\6\p\j\m\b\0\q\l\4\g\0\x\o\9\4\p\z\e\w\s\h\r\d\b\p\e\n\u\t\e\p\b\9\7\j\d\d\g\y\0\n\x\m\9\e\8\l\h\g\u\4\3\v\m\9\y\1\l\3\i\9\6\x\x\f\n\3\e\s\l\e\d\n\h\1\s\8\9\x\p\c\a\c\c\g\4\q\q\n\u\u\4\f\u\c\e\k\d\x\9\u\7\a\u\w\9\f\5\1\t\y\5\b\e\w\c\p\c\k\j\8\c\8\d\s\k\j\3\9\7\o\p\0\j\2\9\k\6\g\u\6\w\b\2\k\r\z\j\9\q\n\8\x\m\v\0\z\6\l\z\z\1\5\r\1\x\h\1\u\o\8\a\4\q\8\n\e\v\y\q\o\9\6\k\u\y\d\0\s\9\i\w\p\h\e\r\h\9\t\q\s\s\y\y\p\1\v\x\2\7\9\z\c\v\o\g\k\s\o\t\w\a\z\5\2\p\d\d\6\s\1\z\7\o\6\t\d\u\k\k\z\h\k\o\3\q\m\w\0\6\m\1\w\7\k\3\z\2\g\6\v\d\b\v\7\y\4\8\3\f\2\a\8\r\h\t\z\3\3\g\q\9\d\g\n\7\8\g\m\5\j\h\q\l\v\v\r\l\0\2\i\q\2\2\l\4\y\h\b\i\v\k\t\m\l\1\a\9\o\0\z\u\d\9\v\o\c\a\w\k\u\1\9\r\r\h\j\8\l\k\7\s\p\e\1\f\r\t\0\l\9\g\w\n\m\0\c\i\8\b\3\g\6\n\u\r\a\d\m\c\b\f\d\3\3\d\v\9\7\y\n\6\j\u\d\e\m\9\a\k\v\i\0\v\f\s\4\s\9\j\y\i\m\o\p\8\1\m\t\9\6\f\l\4\w\p\r\x\a\c\8\y\u\r\j\q\p\2\s\3\i\p\l\v\z\s\n\n\2\r\p\t\i\e\z\m\4\m ]] 00:09:36.325 02:58:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:36.325 02:58:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:36.592 [2024-07-13 02:58:42.871319] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:36.592 [2024-07-13 02:58:42.871475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65586 ] 00:09:36.592 [2024-07-13 02:58:43.040560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.852 [2024-07-13 02:58:43.202348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.110 [2024-07-13 02:58:43.355796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:38.045  Copying: 512/512 [B] (average 250 kBps) 00:09:38.045 00:09:38.045 02:58:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8ffwvhmhju10ru095ott133hnfqfosab3rbx3erufmzjnaweqrb0epz3zp9p31lu6pjmb0ql4g0xo94pzewshrdbpenutepb97jddgy0nxm9e8lhgu43vm9y1l3i96xxfn3eslednh1s89xpcaccg4qqnuu4fucekdx9u7auw9f51ty5bewcpckj8c8dskj397op0j29k6gu6wb2krzj9qn8xmv0z6lzz15r1xh1uo8a4q8nevyqo96kuyd0s9iwpherh9tqssyyp1vx279zcvogksotwaz52pdd6s1z7o6tdukkzhko3qmw06m1w7k3z2g6vdbv7y483f2a8rhtz33gq9dgn78gm5jhqlvvrl02iq22l4yhbivktml1a9o0zud9vocawku19rrhj8lk7spe1frt0l9gwnm0ci8b3g6nuradmcbfd33dv97yn6judem9akvi0vfs4s9jyimop81mt96fl4wprxac8yurjqp2s3iplvzsnn2rptiezm4m == \8\f\f\w\v\h\m\h\j\u\1\0\r\u\0\9\5\o\t\t\1\3\3\h\n\f\q\f\o\s\a\b\3\r\b\x\3\e\r\u\f\m\z\j\n\a\w\e\q\r\b\0\e\p\z\3\z\p\9\p\3\1\l\u\6\p\j\m\b\0\q\l\4\g\0\x\o\9\4\p\z\e\w\s\h\r\d\b\p\e\n\u\t\e\p\b\9\7\j\d\d\g\y\0\n\x\m\9\e\8\l\h\g\u\4\3\v\m\9\y\1\l\3\i\9\6\x\x\f\n\3\e\s\l\e\d\n\h\1\s\8\9\x\p\c\a\c\c\g\4\q\q\n\u\u\4\f\u\c\e\k\d\x\9\u\7\a\u\w\9\f\5\1\t\y\5\b\e\w\c\p\c\k\j\8\c\8\d\s\k\j\3\9\7\o\p\0\j\2\9\k\6\g\u\6\w\b\2\k\r\z\j\9\q\n\8\x\m\v\0\z\6\l\z\z\1\5\r\1\x\h\1\u\o\8\a\4\q\8\n\e\v\y\q\o\9\6\k\u\y\d\0\s\9\i\w\p\h\e\r\h\9\t\q\s\s\y\y\p\1\v\x\2\7\9\z\c\v\o\g\k\s\o\t\w\a\z\5\2\p\d\d\6\s\1\z\7\o\6\t\d\u\k\k\z\h\k\o\3\q\m\w\0\6\m\1\w\7\k\3\z\2\g\6\v\d\b\v\7\y\4\8\3\f\2\a\8\r\h\t\z\3\3\g\q\9\d\g\n\7\8\g\m\5\j\h\q\l\v\v\r\l\0\2\i\q\2\2\l\4\y\h\b\i\v\k\t\m\l\1\a\9\o\0\z\u\d\9\v\o\c\a\w\k\u\1\9\r\r\h\j\8\l\k\7\s\p\e\1\f\r\t\0\l\9\g\w\n\m\0\c\i\8\b\3\g\6\n\u\r\a\d\m\c\b\f\d\3\3\d\v\9\7\y\n\6\j\u\d\e\m\9\a\k\v\i\0\v\f\s\4\s\9\j\y\i\m\o\p\8\1\m\t\9\6\f\l\4\w\p\r\x\a\c\8\y\u\r\j\q\p\2\s\3\i\p\l\v\z\s\n\n\2\r\p\t\i\e\z\m\4\m ]] 00:09:38.045 02:58:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:38.045 02:58:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:38.045 [2024-07-13 02:58:44.511495] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:38.045 [2024-07-13 02:58:44.511657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65605 ] 00:09:38.304 [2024-07-13 02:58:44.681071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.563 [2024-07-13 02:58:44.835957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.563 [2024-07-13 02:58:44.991618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:39.755  Copying: 512/512 [B] (average 166 kBps) 00:09:39.755 00:09:39.755 02:58:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8ffwvhmhju10ru095ott133hnfqfosab3rbx3erufmzjnaweqrb0epz3zp9p31lu6pjmb0ql4g0xo94pzewshrdbpenutepb97jddgy0nxm9e8lhgu43vm9y1l3i96xxfn3eslednh1s89xpcaccg4qqnuu4fucekdx9u7auw9f51ty5bewcpckj8c8dskj397op0j29k6gu6wb2krzj9qn8xmv0z6lzz15r1xh1uo8a4q8nevyqo96kuyd0s9iwpherh9tqssyyp1vx279zcvogksotwaz52pdd6s1z7o6tdukkzhko3qmw06m1w7k3z2g6vdbv7y483f2a8rhtz33gq9dgn78gm5jhqlvvrl02iq22l4yhbivktml1a9o0zud9vocawku19rrhj8lk7spe1frt0l9gwnm0ci8b3g6nuradmcbfd33dv97yn6judem9akvi0vfs4s9jyimop81mt96fl4wprxac8yurjqp2s3iplvzsnn2rptiezm4m == \8\f\f\w\v\h\m\h\j\u\1\0\r\u\0\9\5\o\t\t\1\3\3\h\n\f\q\f\o\s\a\b\3\r\b\x\3\e\r\u\f\m\z\j\n\a\w\e\q\r\b\0\e\p\z\3\z\p\9\p\3\1\l\u\6\p\j\m\b\0\q\l\4\g\0\x\o\9\4\p\z\e\w\s\h\r\d\b\p\e\n\u\t\e\p\b\9\7\j\d\d\g\y\0\n\x\m\9\e\8\l\h\g\u\4\3\v\m\9\y\1\l\3\i\9\6\x\x\f\n\3\e\s\l\e\d\n\h\1\s\8\9\x\p\c\a\c\c\g\4\q\q\n\u\u\4\f\u\c\e\k\d\x\9\u\7\a\u\w\9\f\5\1\t\y\5\b\e\w\c\p\c\k\j\8\c\8\d\s\k\j\3\9\7\o\p\0\j\2\9\k\6\g\u\6\w\b\2\k\r\z\j\9\q\n\8\x\m\v\0\z\6\l\z\z\1\5\r\1\x\h\1\u\o\8\a\4\q\8\n\e\v\y\q\o\9\6\k\u\y\d\0\s\9\i\w\p\h\e\r\h\9\t\q\s\s\y\y\p\1\v\x\2\7\9\z\c\v\o\g\k\s\o\t\w\a\z\5\2\p\d\d\6\s\1\z\7\o\6\t\d\u\k\k\z\h\k\o\3\q\m\w\0\6\m\1\w\7\k\3\z\2\g\6\v\d\b\v\7\y\4\8\3\f\2\a\8\r\h\t\z\3\3\g\q\9\d\g\n\7\8\g\m\5\j\h\q\l\v\v\r\l\0\2\i\q\2\2\l\4\y\h\b\i\v\k\t\m\l\1\a\9\o\0\z\u\d\9\v\o\c\a\w\k\u\1\9\r\r\h\j\8\l\k\7\s\p\e\1\f\r\t\0\l\9\g\w\n\m\0\c\i\8\b\3\g\6\n\u\r\a\d\m\c\b\f\d\3\3\d\v\9\7\y\n\6\j\u\d\e\m\9\a\k\v\i\0\v\f\s\4\s\9\j\y\i\m\o\p\8\1\m\t\9\6\f\l\4\w\p\r\x\a\c\8\y\u\r\j\q\p\2\s\3\i\p\l\v\z\s\n\n\2\r\p\t\i\e\z\m\4\m ]] 00:09:39.755 00:09:39.755 real 0m13.467s 00:09:39.755 user 0m10.981s 00:09:39.755 sys 0m1.495s 00:09:39.755 02:58:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.755 ************************************ 00:09:39.755 END TEST dd_flags_misc_forced_aio 00:09:39.755 ************************************ 00:09:39.755 02:58:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:39.755 02:58:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:39.755 02:58:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:09:39.755 02:58:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:39.755 02:58:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:39.755 ************************************ 00:09:39.755 END TEST spdk_dd_posix 00:09:39.755 ************************************ 00:09:39.755 00:09:39.755 real 0m56.927s 00:09:39.755 user 0m44.658s 
00:09:39.755 sys 0m14.005s 00:09:39.755 02:58:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.755 02:58:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:39.755 02:58:46 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:09:39.755 02:58:46 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:39.755 02:58:46 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:39.755 02:58:46 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.755 02:58:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:39.755 ************************************ 00:09:39.755 START TEST spdk_dd_malloc 00:09:39.755 ************************************ 00:09:39.755 02:58:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:39.755 * Looking for test storage... 00:09:39.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:39.755 02:58:46 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.755 02:58:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.755 02:58:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.755 02:58:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.755 02:58:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:39.756 ************************************ 00:09:39.756 START TEST dd_malloc_copy 00:09:39.756 ************************************ 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:39.756 02:58:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:40.014 { 00:09:40.014 "subsystems": [ 00:09:40.014 { 00:09:40.014 "subsystem": "bdev", 00:09:40.014 "config": [ 00:09:40.015 { 00:09:40.015 "params": { 00:09:40.015 "block_size": 512, 00:09:40.015 "num_blocks": 1048576, 00:09:40.015 "name": "malloc0" 00:09:40.015 }, 00:09:40.015 "method": "bdev_malloc_create" 00:09:40.015 }, 00:09:40.015 { 00:09:40.015 "params": { 00:09:40.015 "block_size": 512, 00:09:40.015 "num_blocks": 1048576, 00:09:40.015 "name": "malloc1" 00:09:40.015 }, 00:09:40.015 "method": "bdev_malloc_create" 00:09:40.015 }, 00:09:40.015 { 00:09:40.015 "method": "bdev_wait_for_examine" 00:09:40.015 } 00:09:40.015 ] 00:09:40.015 } 00:09:40.015 ] 00:09:40.015 } 00:09:40.015 [2024-07-13 02:58:46.344432] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
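The malloc_copy run involves no files at all: gen_conf prints the JSON shown above on /dev/fd/62 and spdk_dd copies bdev malloc0 into malloc1 (and later back the other way). Below is a sketch of the equivalent direct invocation, with the same config supplied through bash process substitution in place of gen_conf; the binary path is the one used throughout this log, and the two bdevs are the 1048576-block, 512-byte-block (512 MiB) malloc devices defined above.

# Sketch of the invocation traced above; the JSON mirrors gen_conf's output.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)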
00:09:40.015 [2024-07-13 02:58:46.344602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65686 ] 00:09:40.273 [2024-07-13 02:58:46.516717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.273 [2024-07-13 02:58:46.747503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.532 [2024-07-13 02:58:46.929856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:48.111  Copying: 157/512 [MB] (157 MBps) Copying: 316/512 [MB] (159 MBps) Copying: 476/512 [MB] (159 MBps) Copying: 512/512 [MB] (average 159 MBps) 00:09:48.111 00:09:48.111 02:58:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:48.111 02:58:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:09:48.111 02:58:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:48.111 02:58:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:48.111 { 00:09:48.111 "subsystems": [ 00:09:48.111 { 00:09:48.111 "subsystem": "bdev", 00:09:48.111 "config": [ 00:09:48.111 { 00:09:48.111 "params": { 00:09:48.111 "block_size": 512, 00:09:48.111 "num_blocks": 1048576, 00:09:48.111 "name": "malloc0" 00:09:48.111 }, 00:09:48.111 "method": "bdev_malloc_create" 00:09:48.111 }, 00:09:48.111 { 00:09:48.111 "params": { 00:09:48.111 "block_size": 512, 00:09:48.111 "num_blocks": 1048576, 00:09:48.111 "name": "malloc1" 00:09:48.111 }, 00:09:48.111 "method": "bdev_malloc_create" 00:09:48.111 }, 00:09:48.111 { 00:09:48.111 "method": "bdev_wait_for_examine" 00:09:48.111 } 00:09:48.111 ] 00:09:48.111 } 00:09:48.111 ] 00:09:48.111 } 00:09:48.111 [2024-07-13 02:58:54.321486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
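For reference, the dd_malloc_copy run recorded above reduces to handing spdk_dd a bdev config over a /dev/fd path and naming the input and output bdevs. The sketch below is a rough re-creation, not the test script itself: the SPDK_DD and CONF variable names are illustrative, and process substitution stands in for the literal --json /dev/fd/62 seen in the records; every flag and config field is taken from the log.
#!/usr/bin/env bash
# Rough sketch of the malloc-to-malloc round trip logged above (hypothetical wrapper).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path used by this CI run

# Two malloc bdevs, 1048576 blocks of 512 bytes each, as in the logged JSON config.
CONF='{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc0"},"method":"bdev_malloc_create"},
  {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc1"},"method":"bdev_malloc_create"},
  {"method":"bdev_wait_for_examine"}]}]}'

# Forward copy (malloc0 -> malloc1), then the reverse direction, config passed on a /dev/fd path.
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --json <(printf '%s' "$CONF")
"$SPDK_DD" --ib=malloc1 --ob=malloc0 --json <(printf '%s' "$CONF")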
00:09:48.111 [2024-07-13 02:58:54.321648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65784 ] 00:09:48.111 [2024-07-13 02:58:54.491843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.371 [2024-07-13 02:58:54.722719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.630 [2024-07-13 02:58:54.888246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:55.713  Copying: 184/512 [MB] (184 MBps) Copying: 367/512 [MB] (183 MBps) Copying: 512/512 [MB] (average 183 MBps) 00:09:55.713 00:09:55.713 00:09:55.713 real 0m15.395s 00:09:55.713 user 0m14.351s 00:09:55.713 sys 0m0.845s 00:09:55.713 02:59:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:55.713 ************************************ 00:09:55.713 END TEST dd_malloc_copy 00:09:55.713 ************************************ 00:09:55.713 02:59:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:55.713 02:59:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:09:55.713 ************************************ 00:09:55.713 END TEST spdk_dd_malloc 00:09:55.713 ************************************ 00:09:55.713 00:09:55.713 real 0m15.544s 00:09:55.713 user 0m14.397s 00:09:55.713 sys 0m0.947s 00:09:55.713 02:59:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:55.713 02:59:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:55.713 02:59:01 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:09:55.714 02:59:01 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:55.714 02:59:01 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:55.714 02:59:01 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.714 02:59:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:55.714 ************************************ 00:09:55.714 START TEST spdk_dd_bdev_to_bdev 00:09:55.714 ************************************ 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:55.714 * Looking for test storage... 
00:09:55.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:55.714 
02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:55.714 ************************************ 00:09:55.714 START TEST dd_inflate_file 00:09:55.714 ************************************ 00:09:55.714 02:59:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:55.714 [2024-07-13 02:59:01.928651] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:55.714 [2024-07-13 02:59:01.929033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65928 ] 00:09:55.714 [2024-07-13 02:59:02.097335] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.972 [2024-07-13 02:59:02.297247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.230 [2024-07-13 02:59:02.498984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:57.605  Copying: 64/64 [MB] (average 1684 MBps) 00:09:57.605 00:09:57.605 00:09:57.605 ************************************ 00:09:57.605 END TEST dd_inflate_file 00:09:57.605 ************************************ 00:09:57.605 real 0m1.893s 00:09:57.605 user 0m1.574s 00:09:57.605 sys 0m0.985s 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:57.605 ************************************ 00:09:57.605 START TEST dd_copy_to_out_bdev 00:09:57.605 ************************************ 00:09:57.605 02:59:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:57.605 { 00:09:57.605 "subsystems": [ 00:09:57.605 { 00:09:57.605 "subsystem": "bdev", 00:09:57.605 "config": [ 00:09:57.605 { 00:09:57.605 "params": { 00:09:57.605 "trtype": "pcie", 00:09:57.605 "traddr": "0000:00:10.0", 00:09:57.605 "name": "Nvme0" 00:09:57.605 }, 00:09:57.605 "method": "bdev_nvme_attach_controller" 00:09:57.605 }, 00:09:57.605 { 00:09:57.605 "params": { 00:09:57.605 "trtype": "pcie", 00:09:57.605 "traddr": "0000:00:11.0", 00:09:57.605 "name": "Nvme1" 00:09:57.605 }, 00:09:57.605 "method": "bdev_nvme_attach_controller" 00:09:57.605 }, 00:09:57.605 { 00:09:57.605 "method": "bdev_wait_for_examine" 00:09:57.605 } 00:09:57.605 ] 00:09:57.605 } 00:09:57.605 ] 00:09:57.605 } 00:09:57.605 [2024-07-13 02:59:03.880430] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
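The inflate and copy-to-bdev steps recorded above follow the same pattern: grow dd.dump0 by appending zeroes, then stream it into the Nvme0n1 bdev with both PCIe controllers attached. A minimal sketch under the same assumptions as before (variable names are illustrative, process substitution replaces the literal /dev/fd/62):
# Sketch of the inflate + copy-to-out-bdev steps above; paths and PCI addresses are the ones this CI VM used.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
NVME_CONF='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"params":{"trtype":"pcie","traddr":"0000:00:11.0","name":"Nvme1"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'

# 1) Inflate the dump file by appending 64 MiB of zeroes.
"$SPDK_DD" --if=/dev/zero --of="$DUMP0" --oflag=append --bs=1048576 --count=64
# 2) Attach both NVMe controllers and stream the file into Nvme0n1.
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --json <(printf '%s' "$NVME_CONF")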
00:09:57.605 [2024-07-13 02:59:03.880785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65976 ] 00:09:57.605 [2024-07-13 02:59:04.049370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.863 [2024-07-13 02:59:04.199651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.121 [2024-07-13 02:59:04.374263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:00.693  Copying: 43/64 [MB] (43 MBps) Copying: 64/64 [MB] (average 43 MBps) 00:10:00.693 00:10:00.693 ************************************ 00:10:00.693 END TEST dd_copy_to_out_bdev 00:10:00.694 ************************************ 00:10:00.694 00:10:00.694 real 0m3.306s 00:10:00.694 user 0m3.008s 00:10:00.694 sys 0m2.350s 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:00.694 ************************************ 00:10:00.694 START TEST dd_offset_magic 00:10:00.694 ************************************ 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:00.694 02:59:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:00.952 { 00:10:00.952 "subsystems": [ 00:10:00.952 { 00:10:00.952 "subsystem": "bdev", 00:10:00.952 "config": [ 00:10:00.952 { 00:10:00.952 "params": { 00:10:00.952 "trtype": "pcie", 00:10:00.952 "traddr": "0000:00:10.0", 00:10:00.952 "name": "Nvme0" 00:10:00.952 }, 00:10:00.952 "method": "bdev_nvme_attach_controller" 00:10:00.952 }, 00:10:00.952 { 00:10:00.952 "params": { 00:10:00.952 "trtype": "pcie", 00:10:00.952 "traddr": 
"0000:00:11.0", 00:10:00.952 "name": "Nvme1" 00:10:00.952 }, 00:10:00.952 "method": "bdev_nvme_attach_controller" 00:10:00.952 }, 00:10:00.952 { 00:10:00.952 "method": "bdev_wait_for_examine" 00:10:00.952 } 00:10:00.952 ] 00:10:00.952 } 00:10:00.952 ] 00:10:00.952 } 00:10:00.952 [2024-07-13 02:59:07.246017] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:00.952 [2024-07-13 02:59:07.246422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66041 ] 00:10:00.952 [2024-07-13 02:59:07.420528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.211 [2024-07-13 02:59:07.576237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.470 [2024-07-13 02:59:07.728045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:02.666  Copying: 65/65 [MB] (average 942 MBps) 00:10:02.666 00:10:02.666 02:59:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:10:02.666 02:59:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:10:02.666 02:59:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:02.666 02:59:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:02.666 { 00:10:02.666 "subsystems": [ 00:10:02.666 { 00:10:02.666 "subsystem": "bdev", 00:10:02.666 "config": [ 00:10:02.666 { 00:10:02.666 "params": { 00:10:02.666 "trtype": "pcie", 00:10:02.666 "traddr": "0000:00:10.0", 00:10:02.666 "name": "Nvme0" 00:10:02.666 }, 00:10:02.666 "method": "bdev_nvme_attach_controller" 00:10:02.666 }, 00:10:02.666 { 00:10:02.666 "params": { 00:10:02.666 "trtype": "pcie", 00:10:02.666 "traddr": "0000:00:11.0", 00:10:02.666 "name": "Nvme1" 00:10:02.666 }, 00:10:02.666 "method": "bdev_nvme_attach_controller" 00:10:02.666 }, 00:10:02.666 { 00:10:02.666 "method": "bdev_wait_for_examine" 00:10:02.666 } 00:10:02.666 ] 00:10:02.666 } 00:10:02.666 ] 00:10:02.666 } 00:10:02.666 [2024-07-13 02:59:08.944645] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:02.666 [2024-07-13 02:59:08.944781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66064 ] 00:10:02.666 [2024-07-13 02:59:09.091672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.926 [2024-07-13 02:59:09.239635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.926 [2024-07-13 02:59:09.400945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:04.561  Copying: 1024/1024 [kB] (average 500 MBps) 00:10:04.561 00:10:04.561 02:59:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:10:04.561 02:59:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:10:04.561 02:59:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:10:04.561 02:59:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:10:04.561 02:59:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:10:04.561 02:59:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:04.561 02:59:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:04.561 { 00:10:04.561 "subsystems": [ 00:10:04.561 { 00:10:04.561 "subsystem": "bdev", 00:10:04.561 "config": [ 00:10:04.561 { 00:10:04.561 "params": { 00:10:04.561 "trtype": "pcie", 00:10:04.561 "traddr": "0000:00:10.0", 00:10:04.561 "name": "Nvme0" 00:10:04.561 }, 00:10:04.561 "method": "bdev_nvme_attach_controller" 00:10:04.561 }, 00:10:04.561 { 00:10:04.561 "params": { 00:10:04.561 "trtype": "pcie", 00:10:04.561 "traddr": "0000:00:11.0", 00:10:04.561 "name": "Nvme1" 00:10:04.561 }, 00:10:04.561 "method": "bdev_nvme_attach_controller" 00:10:04.561 }, 00:10:04.561 { 00:10:04.561 "method": "bdev_wait_for_examine" 00:10:04.561 } 00:10:04.561 ] 00:10:04.561 } 00:10:04.561 ] 00:10:04.561 } 00:10:04.561 [2024-07-13 02:59:10.784559] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
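One dd_offset_magic round trip, as exercised above, copies 65 MiB from Nvme0n1 into Nvme1n1 at a block offset and then reads 1 MiB back from that offset to confirm the magic string moved with the data. The sketch below spells this out for offset 16 (offset 64 follows with only --seek/--skip changed); it is a reconstruction from the records, not the test script, and the NVME_CONF/SPDK_DD names are illustrative.
# One offset_magic round trip from the log, offset 16 shown.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
MAGIC='This Is Our Magic, find it'
NVME_CONF='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"params":{"trtype":"pcie","traddr":"0000:00:11.0","name":"Nvme1"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'

# Copy 65 MiB from Nvme0n1 into Nvme1n1, landing 16 MiB into the target bdev...
"$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json <(printf '%s' "$NVME_CONF")
# ...then pull 1 MiB back from that offset and check that the 26-byte magic string travelled with it.
"$SPDK_DD" --ib=Nvme1n1 --of="$DUMP1" --count=1 --skip=16 --bs=1048576 --json <(printf '%s' "$NVME_CONF")
read -rn26 magic_check < "$DUMP1"
[[ $magic_check == "$MAGIC" ]] && echo "offset 16: magic found"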
00:10:04.561 [2024-07-13 02:59:10.784725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66098 ] 00:10:04.561 [2024-07-13 02:59:10.956788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.819 [2024-07-13 02:59:11.184333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.078 [2024-07-13 02:59:11.391761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.272  Copying: 65/65 [MB] (average 1065 MBps) 00:10:06.272 00:10:06.272 02:59:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:10:06.272 02:59:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:10:06.272 02:59:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:06.272 02:59:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:06.272 { 00:10:06.272 "subsystems": [ 00:10:06.272 { 00:10:06.272 "subsystem": "bdev", 00:10:06.272 "config": [ 00:10:06.272 { 00:10:06.272 "params": { 00:10:06.272 "trtype": "pcie", 00:10:06.272 "traddr": "0000:00:10.0", 00:10:06.272 "name": "Nvme0" 00:10:06.272 }, 00:10:06.272 "method": "bdev_nvme_attach_controller" 00:10:06.272 }, 00:10:06.272 { 00:10:06.273 "params": { 00:10:06.273 "trtype": "pcie", 00:10:06.273 "traddr": "0000:00:11.0", 00:10:06.273 "name": "Nvme1" 00:10:06.273 }, 00:10:06.273 "method": "bdev_nvme_attach_controller" 00:10:06.273 }, 00:10:06.273 { 00:10:06.273 "method": "bdev_wait_for_examine" 00:10:06.273 } 00:10:06.273 ] 00:10:06.273 } 00:10:06.273 ] 00:10:06.273 } 00:10:06.273 [2024-07-13 02:59:12.571602] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:06.273 [2024-07-13 02:59:12.571768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66125 ] 00:10:06.273 [2024-07-13 02:59:12.735834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.530 [2024-07-13 02:59:12.883160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.788 [2024-07-13 02:59:13.024202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:07.758  Copying: 1024/1024 [kB] (average 500 MBps) 00:10:07.758 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:10:07.758 ************************************ 00:10:07.758 END TEST dd_offset_magic 00:10:07.758 ************************************ 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:10:07.758 00:10:07.758 real 0m7.035s 00:10:07.758 user 0m5.969s 00:10:07.758 sys 0m2.111s 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:07.758 02:59:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:08.017 { 00:10:08.017 "subsystems": [ 00:10:08.017 { 00:10:08.017 "subsystem": "bdev", 00:10:08.017 "config": [ 00:10:08.017 { 00:10:08.017 "params": { 00:10:08.017 "trtype": "pcie", 00:10:08.017 "traddr": "0000:00:10.0", 00:10:08.017 "name": "Nvme0" 00:10:08.017 }, 00:10:08.017 "method": "bdev_nvme_attach_controller" 00:10:08.017 }, 00:10:08.017 { 00:10:08.017 "params": { 00:10:08.017 "trtype": "pcie", 00:10:08.017 "traddr": "0000:00:11.0", 00:10:08.017 "name": "Nvme1" 00:10:08.017 }, 00:10:08.017 "method": "bdev_nvme_attach_controller" 00:10:08.017 }, 00:10:08.017 { 00:10:08.017 "method": "bdev_wait_for_examine" 00:10:08.017 } 00:10:08.017 ] 00:10:08.017 } 00:10:08.017 ] 00:10:08.017 } 00:10:08.017 [2024-07-13 02:59:14.320844] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:08.017 [2024-07-13 02:59:14.321033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66174 ] 00:10:08.017 [2024-07-13 02:59:14.489811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.275 [2024-07-13 02:59:14.640107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.534 [2024-07-13 02:59:14.788984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:09.470  Copying: 5120/5120 [kB] (average 1250 MBps) 00:10:09.470 00:10:09.470 02:59:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:10:09.470 02:59:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:10:09.470 02:59:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:10:09.470 02:59:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:10:09.470 02:59:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:10:09.470 02:59:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:10:09.470 02:59:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:10:09.470 02:59:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:10:09.470 02:59:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:09.470 02:59:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:09.470 { 00:10:09.470 "subsystems": [ 00:10:09.470 { 00:10:09.470 "subsystem": "bdev", 00:10:09.470 "config": [ 00:10:09.470 { 00:10:09.470 "params": { 00:10:09.470 "trtype": "pcie", 00:10:09.470 "traddr": "0000:00:10.0", 00:10:09.470 "name": "Nvme0" 00:10:09.470 }, 00:10:09.470 "method": "bdev_nvme_attach_controller" 00:10:09.470 }, 00:10:09.470 { 00:10:09.470 "params": { 00:10:09.470 "trtype": "pcie", 00:10:09.470 "traddr": "0000:00:11.0", 00:10:09.470 "name": "Nvme1" 00:10:09.470 }, 00:10:09.470 "method": "bdev_nvme_attach_controller" 00:10:09.470 }, 00:10:09.470 { 00:10:09.470 "method": "bdev_wait_for_examine" 00:10:09.470 } 00:10:09.470 ] 00:10:09.470 } 00:10:09.470 ] 00:10:09.470 } 00:10:09.470 [2024-07-13 02:59:15.867582] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:09.470 [2024-07-13 02:59:15.867731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66196 ] 00:10:09.729 [2024-07-13 02:59:16.036561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.729 [2024-07-13 02:59:16.217085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.988 [2024-07-13 02:59:16.392648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:11.182  Copying: 5120/5120 [kB] (average 833 MBps) 00:10:11.182 00:10:11.441 02:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:10:11.441 ************************************ 00:10:11.441 END TEST spdk_dd_bdev_to_bdev 00:10:11.441 ************************************ 00:10:11.441 00:10:11.441 real 0m15.971s 00:10:11.441 user 0m13.584s 00:10:11.441 sys 0m7.095s 00:10:11.441 02:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.441 02:59:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:11.441 02:59:17 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:11.441 02:59:17 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:10:11.441 02:59:17 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:10:11.441 02:59:17 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:11.441 02:59:17 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.441 02:59:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:11.441 ************************************ 00:10:11.441 START TEST spdk_dd_uring 00:10:11.441 ************************************ 00:10:11.441 02:59:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:10:11.441 * Looking for test storage... 
00:10:11.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:11.441 02:59:17 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.441 02:59:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.441 02:59:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:10:11.442 ************************************ 00:10:11.442 START TEST dd_uring_copy 00:10:11.442 ************************************ 00:10:11.442 
02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=uno4bjqncyu19cde1p0l0h7mryca1axq8e75bb34pn4ksok1bc8m3i381sxe8hr7a8tpwnj16tusm06f4gbnwqnpk25jwv9571cr38e9p6m1onxh03wsp7a9uecyf02m5asaa11j7sf4m55vxafts6h8nh0l84rutag67hviaag9phmp3jnq9dtmra9ucxrxo8ew5st9jyxginpt7o0zok8s7ktkqcl1lvhrzozlbevschvdgz643pqysf092w9d1wsrs84wfg0rq6i8wdlpa0he1ckxeboeii6eduw0anlf17ts64tlm684n5pdq6ocokfvn2ql8mylnh4srqo4ud5vcxc8wvnnup0ypavvf876ktrsvrptc3lpkg87nrmuinrzrgvm61j0zmwewbed02tk4tdhlepd1hu0cu6yv5f17meou8iislal81wvfnzpeth4ssd2rj7edjr16pkie1gf3ya62qpcrpzih4ltuoznpisu6mxwre5zrhs93oarw1pwjv63c5bpkx3n7qtjg9w05h1majvugpu7ude3vtr5ci29nkqztumdzehwbohnhelv60e44v0mz3bfcro3px74kuvos22lzqniilrxzngz63o9zjixwsspgf5ohzy6lwdomm0d1xefiic78qbccekvfmaonamjiyvbmv9k4lh7f1isogkc0xsa5z7ldpxwt7emjujk8nuk7j3g8ff8cg3fdx7tz1q3fmtni5raelt6juj0m9toag3hunzfy3z7nfvlfon9jb1mnfw2xztxwuri7wftmf21f1uj8p1nh6jvgohdk3z3z6stg66f2kewlz1lq5i81jlq0r8ylomrvphtryh0b7vo4vt44hl1hs193nci9pf8oy4saxlaofnnlgvcp34uz9zdnsdc5eiw1tx14qlhu2fpzfqepu4twfq87pg11preoxtpn5pb8qqw5s2gia46gezpcjt8qrdr88rccn7ch49gkac7w0m9ifho2rqnp3zjtqkeyn42vyjr 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo uno4bjqncyu19cde1p0l0h7mryca1axq8e75bb34pn4ksok1bc8m3i381sxe8hr7a8tpwnj16tusm06f4gbnwqnpk25jwv9571cr38e9p6m1onxh03wsp7a9uecyf02m5asaa11j7sf4m55vxafts6h8nh0l84rutag67hviaag9phmp3jnq9dtmra9ucxrxo8ew5st9jyxginpt7o0zok8s7ktkqcl1lvhrzozlbevschvdgz643pqysf092w9d1wsrs84wfg0rq6i8wdlpa0he1ckxeboeii6eduw0anlf17ts64tlm684n5pdq6ocokfvn2ql8mylnh4srqo4ud5vcxc8wvnnup0ypavvf876ktrsvrptc3lpkg87nrmuinrzrgvm61j0zmwewbed02tk4tdhlepd1hu0cu6yv5f17meou8iislal81wvfnzpeth4ssd2rj7edjr16pkie1gf3ya62qpcrpzih4ltuoznpisu6mxwre5zrhs93oarw1pwjv63c5bpkx3n7qtjg9w05h1majvugpu7ude3vtr5ci29nkqztumdzehwbohnhelv60e44v0mz3bfcro3px74kuvos22lzqniilrxzngz63o9zjixwsspgf5ohzy6lwdomm0d1xefiic78qbccekvfmaonamjiyvbmv9k4lh7f1isogkc0xsa5z7ldpxwt7emjujk8nuk7j3g8ff8cg3fdx7tz1q3fmtni5raelt6juj0m9toag3hunzfy3z7nfvlfon9jb1mnfw2xztxwuri7wftmf21f1uj8p1nh6jvgohdk3z3z6stg66f2kewlz1lq5i81jlq0r8ylomrvphtryh0b7vo4vt44hl1hs193nci9pf8oy4saxlaofnnlgvcp34uz9zdnsdc5eiw1tx14qlhu2fpzfqepu4twfq87pg11preoxtpn5pb8qqw5s2gia46gezpcjt8qrdr88rccn7ch49gkac7w0m9ifho2rqnp3zjtqkeyn42vyjr 00:10:11.442 02:59:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:10:11.702 [2024-07-13 02:59:18.045928] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:11.702 [2024-07-13 02:59:18.046121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66278 ] 00:10:11.962 [2024-07-13 02:59:18.218889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.962 [2024-07-13 02:59:18.369205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.221 [2024-07-13 02:59:18.523299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:15.061  Copying: 511/511 [MB] (average 1395 MBps) 00:10:15.061 00:10:15.061 02:59:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:10:15.061 02:59:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:10:15.061 02:59:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:15.061 02:59:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:15.319 { 00:10:15.319 "subsystems": [ 00:10:15.319 { 00:10:15.319 "subsystem": "bdev", 00:10:15.319 "config": [ 00:10:15.319 { 00:10:15.319 "params": { 00:10:15.319 "block_size": 512, 00:10:15.319 "num_blocks": 1048576, 00:10:15.319 "name": "malloc0" 00:10:15.319 }, 00:10:15.319 "method": "bdev_malloc_create" 00:10:15.319 }, 00:10:15.319 { 00:10:15.319 "params": { 00:10:15.319 "filename": "/dev/zram1", 00:10:15.319 "name": "uring0" 00:10:15.319 }, 00:10:15.319 "method": "bdev_uring_create" 00:10:15.319 }, 00:10:15.319 { 00:10:15.319 "method": "bdev_wait_for_examine" 00:10:15.319 } 00:10:15.319 ] 00:10:15.319 } 00:10:15.319 ] 00:10:15.319 } 00:10:15.319 [2024-07-13 02:59:21.616614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:15.319 [2024-07-13 02:59:21.616798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66328 ] 00:10:15.319 [2024-07-13 02:59:21.786172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.578 [2024-07-13 02:59:21.979255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.837 [2024-07-13 02:59:22.168026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:21.803  Copying: 175/512 [MB] (175 MBps) Copying: 362/512 [MB] (187 MBps) Copying: 512/512 [MB] (average 185 MBps) 00:10:21.803 00:10:21.803 02:59:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:10:21.803 02:59:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:10:21.803 02:59:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:21.803 02:59:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:21.803 { 00:10:21.803 "subsystems": [ 00:10:21.803 { 00:10:21.803 "subsystem": "bdev", 00:10:21.803 "config": [ 00:10:21.803 { 00:10:21.803 "params": { 00:10:21.803 "block_size": 512, 00:10:21.803 "num_blocks": 1048576, 00:10:21.803 "name": "malloc0" 00:10:21.803 }, 00:10:21.803 "method": "bdev_malloc_create" 00:10:21.803 }, 00:10:21.804 { 00:10:21.804 "params": { 00:10:21.804 "filename": "/dev/zram1", 00:10:21.804 "name": "uring0" 00:10:21.804 }, 00:10:21.804 "method": "bdev_uring_create" 00:10:21.804 }, 00:10:21.804 { 00:10:21.804 "method": "bdev_wait_for_examine" 00:10:21.804 } 00:10:21.804 ] 00:10:21.804 } 00:10:21.804 ] 00:10:21.804 } 00:10:21.804 [2024-07-13 02:59:27.927776] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
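The dd_uring_copy chain above backs a uring bdev with a zram device and shuttles the magic dump through it in both directions. A sketch follows; note that the zram sysfs disksize knob is an assumption on my part (the log only shows the "echo 512M" xtrace), the run happened to get device id 1, and the variable names are illustrative.
# Sketch of the zram-backed uring copy chain above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1

# Hot-add a zram device (this run got id 1) and size it to 512M via the standard sysfs knob (assumed).
ID=$(cat /sys/class/zram-control/hot_add)
echo 512M > "/sys/block/zram${ID}/disksize"

# malloc0 plus a uring bdev backed by /dev/zram1, as in the logged config.
URING_CONF='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"block_size":512,"num_blocks":1048576,"name":"malloc0"},"method":"bdev_malloc_create"},{"params":{"filename":"/dev/zram1","name":"uring0"},"method":"bdev_uring_create"},{"method":"bdev_wait_for_examine"}]}]}'

# magic.dump0 already starts with the 1024-character magic generated by the test; pad it with
# zeroes to just under 512 MiB, push it into uring0, then read it back out into magic.dump1.
"$SPDK_DD" --if=/dev/zero --of="$DUMP0" --oflag=append --bs=536869887 --count=1
"$SPDK_DD" --if="$DUMP0" --ob=uring0 --json <(printf '%s' "$URING_CONF")
"$SPDK_DD" --ib=uring0 --of="$DUMP1" --json <(printf '%s' "$URING_CONF")
# The test then re-reads the leading 1024 bytes of each dump and diffs the full files.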
00:10:21.804 [2024-07-13 02:59:27.928002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66401 ] 00:10:21.804 [2024-07-13 02:59:28.097421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.073 [2024-07-13 02:59:28.324743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.073 [2024-07-13 02:59:28.511638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:28.943  Copying: 132/512 [MB] (132 MBps) Copying: 275/512 [MB] (143 MBps) Copying: 426/512 [MB] (150 MBps) Copying: 512/512 [MB] (average 138 MBps) 00:10:28.943 00:10:28.943 02:59:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:10:28.943 02:59:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ uno4bjqncyu19cde1p0l0h7mryca1axq8e75bb34pn4ksok1bc8m3i381sxe8hr7a8tpwnj16tusm06f4gbnwqnpk25jwv9571cr38e9p6m1onxh03wsp7a9uecyf02m5asaa11j7sf4m55vxafts6h8nh0l84rutag67hviaag9phmp3jnq9dtmra9ucxrxo8ew5st9jyxginpt7o0zok8s7ktkqcl1lvhrzozlbevschvdgz643pqysf092w9d1wsrs84wfg0rq6i8wdlpa0he1ckxeboeii6eduw0anlf17ts64tlm684n5pdq6ocokfvn2ql8mylnh4srqo4ud5vcxc8wvnnup0ypavvf876ktrsvrptc3lpkg87nrmuinrzrgvm61j0zmwewbed02tk4tdhlepd1hu0cu6yv5f17meou8iislal81wvfnzpeth4ssd2rj7edjr16pkie1gf3ya62qpcrpzih4ltuoznpisu6mxwre5zrhs93oarw1pwjv63c5bpkx3n7qtjg9w05h1majvugpu7ude3vtr5ci29nkqztumdzehwbohnhelv60e44v0mz3bfcro3px74kuvos22lzqniilrxzngz63o9zjixwsspgf5ohzy6lwdomm0d1xefiic78qbccekvfmaonamjiyvbmv9k4lh7f1isogkc0xsa5z7ldpxwt7emjujk8nuk7j3g8ff8cg3fdx7tz1q3fmtni5raelt6juj0m9toag3hunzfy3z7nfvlfon9jb1mnfw2xztxwuri7wftmf21f1uj8p1nh6jvgohdk3z3z6stg66f2kewlz1lq5i81jlq0r8ylomrvphtryh0b7vo4vt44hl1hs193nci9pf8oy4saxlaofnnlgvcp34uz9zdnsdc5eiw1tx14qlhu2fpzfqepu4twfq87pg11preoxtpn5pb8qqw5s2gia46gezpcjt8qrdr88rccn7ch49gkac7w0m9ifho2rqnp3zjtqkeyn42vyjr == 
\u\n\o\4\b\j\q\n\c\y\u\1\9\c\d\e\1\p\0\l\0\h\7\m\r\y\c\a\1\a\x\q\8\e\7\5\b\b\3\4\p\n\4\k\s\o\k\1\b\c\8\m\3\i\3\8\1\s\x\e\8\h\r\7\a\8\t\p\w\n\j\1\6\t\u\s\m\0\6\f\4\g\b\n\w\q\n\p\k\2\5\j\w\v\9\5\7\1\c\r\3\8\e\9\p\6\m\1\o\n\x\h\0\3\w\s\p\7\a\9\u\e\c\y\f\0\2\m\5\a\s\a\a\1\1\j\7\s\f\4\m\5\5\v\x\a\f\t\s\6\h\8\n\h\0\l\8\4\r\u\t\a\g\6\7\h\v\i\a\a\g\9\p\h\m\p\3\j\n\q\9\d\t\m\r\a\9\u\c\x\r\x\o\8\e\w\5\s\t\9\j\y\x\g\i\n\p\t\7\o\0\z\o\k\8\s\7\k\t\k\q\c\l\1\l\v\h\r\z\o\z\l\b\e\v\s\c\h\v\d\g\z\6\4\3\p\q\y\s\f\0\9\2\w\9\d\1\w\s\r\s\8\4\w\f\g\0\r\q\6\i\8\w\d\l\p\a\0\h\e\1\c\k\x\e\b\o\e\i\i\6\e\d\u\w\0\a\n\l\f\1\7\t\s\6\4\t\l\m\6\8\4\n\5\p\d\q\6\o\c\o\k\f\v\n\2\q\l\8\m\y\l\n\h\4\s\r\q\o\4\u\d\5\v\c\x\c\8\w\v\n\n\u\p\0\y\p\a\v\v\f\8\7\6\k\t\r\s\v\r\p\t\c\3\l\p\k\g\8\7\n\r\m\u\i\n\r\z\r\g\v\m\6\1\j\0\z\m\w\e\w\b\e\d\0\2\t\k\4\t\d\h\l\e\p\d\1\h\u\0\c\u\6\y\v\5\f\1\7\m\e\o\u\8\i\i\s\l\a\l\8\1\w\v\f\n\z\p\e\t\h\4\s\s\d\2\r\j\7\e\d\j\r\1\6\p\k\i\e\1\g\f\3\y\a\6\2\q\p\c\r\p\z\i\h\4\l\t\u\o\z\n\p\i\s\u\6\m\x\w\r\e\5\z\r\h\s\9\3\o\a\r\w\1\p\w\j\v\6\3\c\5\b\p\k\x\3\n\7\q\t\j\g\9\w\0\5\h\1\m\a\j\v\u\g\p\u\7\u\d\e\3\v\t\r\5\c\i\2\9\n\k\q\z\t\u\m\d\z\e\h\w\b\o\h\n\h\e\l\v\6\0\e\4\4\v\0\m\z\3\b\f\c\r\o\3\p\x\7\4\k\u\v\o\s\2\2\l\z\q\n\i\i\l\r\x\z\n\g\z\6\3\o\9\z\j\i\x\w\s\s\p\g\f\5\o\h\z\y\6\l\w\d\o\m\m\0\d\1\x\e\f\i\i\c\7\8\q\b\c\c\e\k\v\f\m\a\o\n\a\m\j\i\y\v\b\m\v\9\k\4\l\h\7\f\1\i\s\o\g\k\c\0\x\s\a\5\z\7\l\d\p\x\w\t\7\e\m\j\u\j\k\8\n\u\k\7\j\3\g\8\f\f\8\c\g\3\f\d\x\7\t\z\1\q\3\f\m\t\n\i\5\r\a\e\l\t\6\j\u\j\0\m\9\t\o\a\g\3\h\u\n\z\f\y\3\z\7\n\f\v\l\f\o\n\9\j\b\1\m\n\f\w\2\x\z\t\x\w\u\r\i\7\w\f\t\m\f\2\1\f\1\u\j\8\p\1\n\h\6\j\v\g\o\h\d\k\3\z\3\z\6\s\t\g\6\6\f\2\k\e\w\l\z\1\l\q\5\i\8\1\j\l\q\0\r\8\y\l\o\m\r\v\p\h\t\r\y\h\0\b\7\v\o\4\v\t\4\4\h\l\1\h\s\1\9\3\n\c\i\9\p\f\8\o\y\4\s\a\x\l\a\o\f\n\n\l\g\v\c\p\3\4\u\z\9\z\d\n\s\d\c\5\e\i\w\1\t\x\1\4\q\l\h\u\2\f\p\z\f\q\e\p\u\4\t\w\f\q\8\7\p\g\1\1\p\r\e\o\x\t\p\n\5\p\b\8\q\q\w\5\s\2\g\i\a\4\6\g\e\z\p\c\j\t\8\q\r\d\r\8\8\r\c\c\n\7\c\h\4\9\g\k\a\c\7\w\0\m\9\i\f\h\o\2\r\q\n\p\3\z\j\t\q\k\e\y\n\4\2\v\y\j\r ]] 00:10:28.943 02:59:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:10:28.943 02:59:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ uno4bjqncyu19cde1p0l0h7mryca1axq8e75bb34pn4ksok1bc8m3i381sxe8hr7a8tpwnj16tusm06f4gbnwqnpk25jwv9571cr38e9p6m1onxh03wsp7a9uecyf02m5asaa11j7sf4m55vxafts6h8nh0l84rutag67hviaag9phmp3jnq9dtmra9ucxrxo8ew5st9jyxginpt7o0zok8s7ktkqcl1lvhrzozlbevschvdgz643pqysf092w9d1wsrs84wfg0rq6i8wdlpa0he1ckxeboeii6eduw0anlf17ts64tlm684n5pdq6ocokfvn2ql8mylnh4srqo4ud5vcxc8wvnnup0ypavvf876ktrsvrptc3lpkg87nrmuinrzrgvm61j0zmwewbed02tk4tdhlepd1hu0cu6yv5f17meou8iislal81wvfnzpeth4ssd2rj7edjr16pkie1gf3ya62qpcrpzih4ltuoznpisu6mxwre5zrhs93oarw1pwjv63c5bpkx3n7qtjg9w05h1majvugpu7ude3vtr5ci29nkqztumdzehwbohnhelv60e44v0mz3bfcro3px74kuvos22lzqniilrxzngz63o9zjixwsspgf5ohzy6lwdomm0d1xefiic78qbccekvfmaonamjiyvbmv9k4lh7f1isogkc0xsa5z7ldpxwt7emjujk8nuk7j3g8ff8cg3fdx7tz1q3fmtni5raelt6juj0m9toag3hunzfy3z7nfvlfon9jb1mnfw2xztxwuri7wftmf21f1uj8p1nh6jvgohdk3z3z6stg66f2kewlz1lq5i81jlq0r8ylomrvphtryh0b7vo4vt44hl1hs193nci9pf8oy4saxlaofnnlgvcp34uz9zdnsdc5eiw1tx14qlhu2fpzfqepu4twfq87pg11preoxtpn5pb8qqw5s2gia46gezpcjt8qrdr88rccn7ch49gkac7w0m9ifho2rqnp3zjtqkeyn42vyjr == 
\u\n\o\4\b\j\q\n\c\y\u\1\9\c\d\e\1\p\0\l\0\h\7\m\r\y\c\a\1\a\x\q\8\e\7\5\b\b\3\4\p\n\4\k\s\o\k\1\b\c\8\m\3\i\3\8\1\s\x\e\8\h\r\7\a\8\t\p\w\n\j\1\6\t\u\s\m\0\6\f\4\g\b\n\w\q\n\p\k\2\5\j\w\v\9\5\7\1\c\r\3\8\e\9\p\6\m\1\o\n\x\h\0\3\w\s\p\7\a\9\u\e\c\y\f\0\2\m\5\a\s\a\a\1\1\j\7\s\f\4\m\5\5\v\x\a\f\t\s\6\h\8\n\h\0\l\8\4\r\u\t\a\g\6\7\h\v\i\a\a\g\9\p\h\m\p\3\j\n\q\9\d\t\m\r\a\9\u\c\x\r\x\o\8\e\w\5\s\t\9\j\y\x\g\i\n\p\t\7\o\0\z\o\k\8\s\7\k\t\k\q\c\l\1\l\v\h\r\z\o\z\l\b\e\v\s\c\h\v\d\g\z\6\4\3\p\q\y\s\f\0\9\2\w\9\d\1\w\s\r\s\8\4\w\f\g\0\r\q\6\i\8\w\d\l\p\a\0\h\e\1\c\k\x\e\b\o\e\i\i\6\e\d\u\w\0\a\n\l\f\1\7\t\s\6\4\t\l\m\6\8\4\n\5\p\d\q\6\o\c\o\k\f\v\n\2\q\l\8\m\y\l\n\h\4\s\r\q\o\4\u\d\5\v\c\x\c\8\w\v\n\n\u\p\0\y\p\a\v\v\f\8\7\6\k\t\r\s\v\r\p\t\c\3\l\p\k\g\8\7\n\r\m\u\i\n\r\z\r\g\v\m\6\1\j\0\z\m\w\e\w\b\e\d\0\2\t\k\4\t\d\h\l\e\p\d\1\h\u\0\c\u\6\y\v\5\f\1\7\m\e\o\u\8\i\i\s\l\a\l\8\1\w\v\f\n\z\p\e\t\h\4\s\s\d\2\r\j\7\e\d\j\r\1\6\p\k\i\e\1\g\f\3\y\a\6\2\q\p\c\r\p\z\i\h\4\l\t\u\o\z\n\p\i\s\u\6\m\x\w\r\e\5\z\r\h\s\9\3\o\a\r\w\1\p\w\j\v\6\3\c\5\b\p\k\x\3\n\7\q\t\j\g\9\w\0\5\h\1\m\a\j\v\u\g\p\u\7\u\d\e\3\v\t\r\5\c\i\2\9\n\k\q\z\t\u\m\d\z\e\h\w\b\o\h\n\h\e\l\v\6\0\e\4\4\v\0\m\z\3\b\f\c\r\o\3\p\x\7\4\k\u\v\o\s\2\2\l\z\q\n\i\i\l\r\x\z\n\g\z\6\3\o\9\z\j\i\x\w\s\s\p\g\f\5\o\h\z\y\6\l\w\d\o\m\m\0\d\1\x\e\f\i\i\c\7\8\q\b\c\c\e\k\v\f\m\a\o\n\a\m\j\i\y\v\b\m\v\9\k\4\l\h\7\f\1\i\s\o\g\k\c\0\x\s\a\5\z\7\l\d\p\x\w\t\7\e\m\j\u\j\k\8\n\u\k\7\j\3\g\8\f\f\8\c\g\3\f\d\x\7\t\z\1\q\3\f\m\t\n\i\5\r\a\e\l\t\6\j\u\j\0\m\9\t\o\a\g\3\h\u\n\z\f\y\3\z\7\n\f\v\l\f\o\n\9\j\b\1\m\n\f\w\2\x\z\t\x\w\u\r\i\7\w\f\t\m\f\2\1\f\1\u\j\8\p\1\n\h\6\j\v\g\o\h\d\k\3\z\3\z\6\s\t\g\6\6\f\2\k\e\w\l\z\1\l\q\5\i\8\1\j\l\q\0\r\8\y\l\o\m\r\v\p\h\t\r\y\h\0\b\7\v\o\4\v\t\4\4\h\l\1\h\s\1\9\3\n\c\i\9\p\f\8\o\y\4\s\a\x\l\a\o\f\n\n\l\g\v\c\p\3\4\u\z\9\z\d\n\s\d\c\5\e\i\w\1\t\x\1\4\q\l\h\u\2\f\p\z\f\q\e\p\u\4\t\w\f\q\8\7\p\g\1\1\p\r\e\o\x\t\p\n\5\p\b\8\q\q\w\5\s\2\g\i\a\4\6\g\e\z\p\c\j\t\8\q\r\d\r\8\8\r\c\c\n\7\c\h\4\9\g\k\a\c\7\w\0\m\9\i\f\h\o\2\r\q\n\p\3\z\j\t\q\k\e\y\n\4\2\v\y\j\r ]] 00:10:28.943 02:59:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:28.943 02:59:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:10:28.943 02:59:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:10:28.943 02:59:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:28.943 02:59:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:28.943 { 00:10:28.943 "subsystems": [ 00:10:28.943 { 00:10:28.943 "subsystem": "bdev", 00:10:28.943 "config": [ 00:10:28.943 { 00:10:28.943 "params": { 00:10:28.943 "block_size": 512, 00:10:28.943 "num_blocks": 1048576, 00:10:28.943 "name": "malloc0" 00:10:28.943 }, 00:10:28.943 "method": "bdev_malloc_create" 00:10:28.943 }, 00:10:28.943 { 00:10:28.943 "params": { 00:10:28.943 "filename": "/dev/zram1", 00:10:28.943 "name": "uring0" 00:10:28.943 }, 00:10:28.943 "method": "bdev_uring_create" 00:10:28.943 }, 00:10:28.943 { 00:10:28.943 "method": "bdev_wait_for_examine" 00:10:28.943 } 00:10:28.943 ] 00:10:28.943 } 00:10:28.943 ] 00:10:28.943 } 00:10:28.943 [2024-07-13 02:59:35.253149] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:28.943 [2024-07-13 02:59:35.253284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66520 ] 00:10:28.943 [2024-07-13 02:59:35.409837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.201 [2024-07-13 02:59:35.572280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.460 [2024-07-13 02:59:35.739082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:35.620  Copying: 136/512 [MB] (136 MBps) Copying: 267/512 [MB] (130 MBps) Copying: 404/512 [MB] (136 MBps) Copying: 512/512 [MB] (average 134 MBps) 00:10:35.620 00:10:35.620 02:59:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:10:35.620 02:59:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:10:35.620 02:59:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:10:35.620 02:59:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:10:35.620 02:59:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:10:35.620 02:59:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:10:35.620 02:59:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:35.620 02:59:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:35.879 { 00:10:35.879 "subsystems": [ 00:10:35.879 { 00:10:35.879 "subsystem": "bdev", 00:10:35.879 "config": [ 00:10:35.879 { 00:10:35.879 "params": { 00:10:35.879 "block_size": 512, 00:10:35.879 "num_blocks": 1048576, 00:10:35.879 "name": "malloc0" 00:10:35.879 }, 00:10:35.879 "method": "bdev_malloc_create" 00:10:35.879 }, 00:10:35.879 { 00:10:35.879 "params": { 00:10:35.879 "filename": "/dev/zram1", 00:10:35.879 "name": "uring0" 00:10:35.879 }, 00:10:35.879 "method": "bdev_uring_create" 00:10:35.879 }, 00:10:35.879 { 00:10:35.879 "params": { 00:10:35.879 "name": "uring0" 00:10:35.879 }, 00:10:35.879 "method": "bdev_uring_delete" 00:10:35.879 }, 00:10:35.879 { 00:10:35.879 "method": "bdev_wait_for_examine" 00:10:35.879 } 00:10:35.879 ] 00:10:35.879 } 00:10:35.879 ] 00:10:35.879 } 00:10:35.879 [2024-07-13 02:59:42.161344] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
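The final uring case, whose failing copy appears in the records below, stacks a bdev_uring_delete step into the config so the target bdev is gone before spdk_dd tries to open it; the copy is then expected to fail with "No such device". A rough sketch of that negative check, with the test's NOT helper replaced by a plain exit-status check and /dev/null standing in for the /dev/fd output used in the log (both substitutions are mine):
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DEL_CONF='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"block_size":512,"num_blocks":1048576,"name":"malloc0"},"method":"bdev_malloc_create"},{"params":{"filename":"/dev/zram1","name":"uring0"},"method":"bdev_uring_create"},{"params":{"name":"uring0"},"method":"bdev_uring_delete"},{"method":"bdev_wait_for_examine"}]}]}'

# uring0 is created and then deleted by the config itself, so the copy has no input bdev left
# to open and spdk_dd should exit non-zero ("Could not open bdev uring0: No such device").
if "$SPDK_DD" --ib=uring0 --of=/dev/null --json <(printf '%s' "$DEL_CONF"); then
    echo "unexpected success" >&2
    exit 1
fi
echo "copy failed as expected: uring0 was already deleted"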
00:10:35.879 [2024-07-13 02:59:42.161546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66614 ] 00:10:35.879 [2024-07-13 02:59:42.332176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.137 [2024-07-13 02:59:42.491737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.396 [2024-07-13 02:59:42.646188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:38.865  Copying: 0/0 [B] (average 0 Bps) 00:10:38.865 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:38.865 02:59:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:38.865 { 00:10:38.865 "subsystems": [ 00:10:38.865 { 00:10:38.865 "subsystem": "bdev", 00:10:38.865 "config": [ 00:10:38.865 { 00:10:38.865 "params": { 00:10:38.865 "block_size": 512, 00:10:38.865 "num_blocks": 1048576, 00:10:38.865 "name": "malloc0" 00:10:38.865 }, 00:10:38.865 "method": "bdev_malloc_create" 00:10:38.865 }, 00:10:38.865 { 00:10:38.865 "params": { 00:10:38.865 "filename": "/dev/zram1", 00:10:38.865 "name": "uring0" 00:10:38.865 }, 00:10:38.865 "method": "bdev_uring_create" 00:10:38.865 }, 00:10:38.865 { 00:10:38.865 "params": { 00:10:38.865 "name": "uring0" 00:10:38.865 }, 00:10:38.865 "method": "bdev_uring_delete" 00:10:38.865 }, 
00:10:38.865 { 00:10:38.865 "method": "bdev_wait_for_examine" 00:10:38.865 } 00:10:38.865 ] 00:10:38.865 } 00:10:38.865 ] 00:10:38.865 } 00:10:38.865 [2024-07-13 02:59:45.293493] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:38.865 [2024-07-13 02:59:45.293720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66667 ] 00:10:39.125 [2024-07-13 02:59:45.460923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.385 [2024-07-13 02:59:45.632069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.385 [2024-07-13 02:59:45.790395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:39.953 [2024-07-13 02:59:46.310362] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:10:39.953 [2024-07-13 02:59:46.310455] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:10:39.953 [2024-07-13 02:59:46.310475] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:10:39.953 [2024-07-13 02:59:46.310491] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:41.859 [2024-07-13 02:59:47.939155] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:41.859 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:10:41.859 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:41.859 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:10:41.859 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:10:41.859 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:10:41.859 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:41.859 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:10:41.859 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:10:41.859 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:10:41.859 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:10:41.859 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:10:41.859 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:42.119 00:10:42.119 real 0m30.718s 00:10:42.119 user 0m25.275s 00:10:42.119 sys 0m16.444s 00:10:42.119 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.119 ************************************ 00:10:42.119 END TEST dd_uring_copy 00:10:42.119 ************************************ 00:10:42.119 02:59:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:42.119 02:59:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:10:42.119 00:10:42.119 real 0m30.855s 00:10:42.119 user 0m25.331s 00:10:42.119 sys 0m16.525s 00:10:42.119 02:59:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.119 02:59:48 spdk_dd.spdk_dd_uring -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.119 ************************************ 00:10:42.119 END TEST spdk_dd_uring 00:10:42.119 ************************************ 00:10:42.380 02:59:48 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:42.380 02:59:48 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:10:42.380 02:59:48 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:42.380 02:59:48 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.380 02:59:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:42.380 ************************************ 00:10:42.380 START TEST spdk_dd_sparse 00:10:42.380 ************************************ 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:10:42.380 * Looking for test storage... 00:10:42.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:10:42.380 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:10:42.381 1+0 records in 00:10:42.381 1+0 records out 00:10:42.381 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00658685 s, 637 MB/s 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:10:42.381 1+0 records in 00:10:42.381 1+0 records out 00:10:42.381 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00523746 s, 801 MB/s 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:10:42.381 1+0 records in 00:10:42.381 1+0 records out 00:10:42.381 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00678031 s, 619 MB/s 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:42.381 ************************************ 00:10:42.381 START TEST dd_sparse_file_to_file 00:10:42.381 ************************************ 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' 
['lvs_name']='dd_lvstore') 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:42.381 02:59:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:42.381 { 00:10:42.381 "subsystems": [ 00:10:42.381 { 00:10:42.381 "subsystem": "bdev", 00:10:42.381 "config": [ 00:10:42.381 { 00:10:42.381 "params": { 00:10:42.381 "block_size": 4096, 00:10:42.381 "filename": "dd_sparse_aio_disk", 00:10:42.381 "name": "dd_aio" 00:10:42.381 }, 00:10:42.381 "method": "bdev_aio_create" 00:10:42.381 }, 00:10:42.381 { 00:10:42.381 "params": { 00:10:42.381 "lvs_name": "dd_lvstore", 00:10:42.381 "bdev_name": "dd_aio" 00:10:42.381 }, 00:10:42.381 "method": "bdev_lvol_create_lvstore" 00:10:42.381 }, 00:10:42.381 { 00:10:42.381 "method": "bdev_wait_for_examine" 00:10:42.381 } 00:10:42.381 ] 00:10:42.381 } 00:10:42.381 ] 00:10:42.381 } 00:10:42.655 [2024-07-13 02:59:48.891852] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:42.655 [2024-07-13 02:59:48.892062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66779 ] 00:10:42.655 [2024-07-13 02:59:49.057487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.927 [2024-07-13 02:59:49.222376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.927 [2024-07-13 02:59:49.380741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:44.121  Copying: 12/36 [MB] (average 1090 MBps) 00:10:44.121 00:10:44.121 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:44.379 00:10:44.379 real 0m1.847s 00:10:44.379 user 0m1.518s 00:10:44.379 sys 0m0.910s 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:10:44.379 ************************************ 00:10:44.379 END TEST dd_sparse_file_to_file 00:10:44.379 ************************************ 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:44.379 ************************************ 00:10:44.379 START TEST dd_sparse_file_to_bdev 00:10:44.379 ************************************ 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:10:44.379 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:10:44.380 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:10:44.380 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:44.380 02:59:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:44.380 { 00:10:44.380 "subsystems": [ 00:10:44.380 { 00:10:44.380 "subsystem": "bdev", 00:10:44.380 "config": [ 00:10:44.380 { 00:10:44.380 "params": { 00:10:44.380 "block_size": 4096, 00:10:44.380 "filename": "dd_sparse_aio_disk", 00:10:44.380 "name": "dd_aio" 00:10:44.380 }, 00:10:44.380 "method": "bdev_aio_create" 00:10:44.380 }, 00:10:44.380 { 00:10:44.380 "params": { 00:10:44.380 "lvs_name": "dd_lvstore", 00:10:44.380 "lvol_name": "dd_lvol", 00:10:44.380 "size_in_mib": 36, 00:10:44.380 "thin_provision": true 00:10:44.380 }, 00:10:44.380 "method": "bdev_lvol_create" 00:10:44.380 }, 00:10:44.380 { 00:10:44.380 "method": "bdev_wait_for_examine" 00:10:44.380 } 00:10:44.380 ] 00:10:44.380 } 00:10:44.380 ] 00:10:44.380 } 00:10:44.380 [2024-07-13 02:59:50.785047] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
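The sparse-size checks in the surrounding file_to_file and bdev_to_file tests reduce to simple stat arithmetic; as a brief recap of what those numbers mean (values and file names taken from the log):
apparent=$(stat --printf=%s file_zero2)   # 37748736 bytes, i.e. a 36 MiB apparent size
blocks=$(stat --printf=%b file_zero2)     # 24576 blocks * 512 bytes = 12582912 bytes (12 MiB) actually allocated
[[ "$apparent" == 37748736 && "$blocks" == 24576 ]]   # only the three 4 MiB extents written by prepare are backed, so the file stays sparse
This also matches the --bs=12582912 (12 MiB) block size and the "Copying: 12/36 [MB]" progress lines in these tests.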
00:10:44.380 [2024-07-13 02:59:50.785228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66835 ] 00:10:44.638 [2024-07-13 02:59:50.950580] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.638 [2024-07-13 02:59:51.119502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.897 [2024-07-13 02:59:51.284836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:46.092  Copying: 12/36 [MB] (average 521 MBps) 00:10:46.092 00:10:46.092 00:10:46.092 real 0m1.818s 00:10:46.092 user 0m1.536s 00:10:46.092 sys 0m0.879s 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:46.092 ************************************ 00:10:46.092 END TEST dd_sparse_file_to_bdev 00:10:46.092 ************************************ 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:46.092 ************************************ 00:10:46.092 START TEST dd_sparse_bdev_to_file 00:10:46.092 ************************************ 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:46.092 02:59:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:46.351 { 00:10:46.351 "subsystems": [ 00:10:46.351 { 00:10:46.351 "subsystem": "bdev", 00:10:46.351 "config": [ 00:10:46.351 { 00:10:46.351 "params": { 00:10:46.351 "block_size": 4096, 00:10:46.351 "filename": "dd_sparse_aio_disk", 00:10:46.351 "name": "dd_aio" 00:10:46.351 }, 00:10:46.351 "method": "bdev_aio_create" 00:10:46.351 }, 00:10:46.351 { 00:10:46.351 "method": "bdev_wait_for_examine" 00:10:46.351 } 00:10:46.351 ] 00:10:46.351 } 00:10:46.351 ] 00:10:46.351 } 00:10:46.351 [2024-07-13 
02:59:52.662232] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:46.351 [2024-07-13 02:59:52.662399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66885 ] 00:10:46.351 [2024-07-13 02:59:52.832584] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.613 [2024-07-13 02:59:53.003805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.871 [2024-07-13 02:59:53.174933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:48.247  Copying: 12/36 [MB] (average 1090 MBps) 00:10:48.247 00:10:48.247 02:59:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:10:48.247 02:59:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:10:48.247 02:59:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:10:48.247 02:59:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:48.248 00:10:48.248 real 0m1.850s 00:10:48.248 user 0m1.558s 00:10:48.248 sys 0m0.886s 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:48.248 ************************************ 00:10:48.248 END TEST dd_sparse_bdev_to_file 00:10:48.248 ************************************ 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:10:48.248 00:10:48.248 real 0m5.816s 00:10:48.248 user 0m4.714s 00:10:48.248 sys 0m2.857s 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.248 02:59:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:48.248 ************************************ 00:10:48.248 END TEST spdk_dd_sparse 00:10:48.248 ************************************ 00:10:48.248 02:59:54 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:48.248 02:59:54 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative 
/home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:48.248 02:59:54 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:48.248 02:59:54 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.248 02:59:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:48.248 ************************************ 00:10:48.248 START TEST spdk_dd_negative 00:10:48.248 ************************************ 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:48.248 * Looking for test storage... 00:10:48.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:48.248 ************************************ 00:10:48.248 START TEST dd_invalid_arguments 00:10:48.248 ************************************ 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:48.248 02:59:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:48.248 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:10:48.248 00:10:48.248 CPU options: 00:10:48.248 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:10:48.248 (like [0,1,10]) 00:10:48.248 --lcores lcore to CPU mapping list. The list is in the format: 00:10:48.248 [<,lcores[@CPUs]>...] 00:10:48.248 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:10:48.248 Within the group, '-' is used for range separator, 00:10:48.248 ',' is used for single number separator. 00:10:48.248 '( )' can be omitted for single element group, 00:10:48.248 '@' can be omitted if cpus and lcores have the same value 00:10:48.248 --disable-cpumask-locks Disable CPU core lock files. 00:10:48.248 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:10:48.248 pollers in the app support interrupt mode) 00:10:48.248 -p, --main-core main (primary) core for DPDK 00:10:48.248 00:10:48.248 Configuration options: 00:10:48.248 -c, --config, --json JSON config file 00:10:48.248 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:10:48.248 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:10:48.248 --wait-for-rpc wait for RPCs to initialize subsystems 00:10:48.248 --rpcs-allowed comma-separated list of permitted RPCS 00:10:48.248 --json-ignore-init-errors don't exit on invalid config entry 00:10:48.248 00:10:48.248 Memory options: 00:10:48.248 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:10:48.248 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:10:48.248 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:10:48.248 -R, --huge-unlink unlink huge files after initialization 00:10:48.248 -n, --mem-channels number of memory channels used for DPDK 00:10:48.248 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:10:48.248 --msg-mempool-size global message memory pool size in count (default: 262143) 00:10:48.248 --no-huge run without using hugepages 00:10:48.248 -i, --shm-id shared memory ID (optional) 00:10:48.248 -g, --single-file-segments force creating just one hugetlbfs file 00:10:48.248 00:10:48.248 PCI options: 00:10:48.248 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:10:48.248 -B, --pci-blocked pci addr to block (can be used more than once) 00:10:48.248 -u, --no-pci disable PCI access 00:10:48.248 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:10:48.248 00:10:48.248 Log options: 00:10:48.248 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:10:48.248 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:10:48.248 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:10:48.248 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:10:48.248 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:10:48.248 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:10:48.248 nvme_auth, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, 00:10:48.248 sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, 00:10:48.248 vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, 00:10:48.248 vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, 00:10:48.248 vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 
virtio_blk, virtio_dev, 00:10:48.248 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:10:48.248 --silence-noticelog disable notice level logging to stderr 00:10:48.248 00:10:48.249 Trace options: 00:10:48.249 --num-trace-entries number of trace entries for each core, must be power of 2, 00:10:48.249 setting 0 to disable trace (default 32768) 00:10:48.249 Tracepoints vary in size and can use more than one trace entry. 00:10:48.249 -e, --tpoint-group [: 128 )) 00:10:48.508 02:59:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:48.508 02:59:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:48.508 00:10:48.508 real 0m0.162s 00:10:48.508 user 0m0.081s 00:10:48.508 sys 0m0.079s 00:10:48.508 02:59:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.508 02:59:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:10:48.508 ************************************ 00:10:48.508 END TEST dd_double_input 00:10:48.508 ************************************ 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:48.767 ************************************ 00:10:48.767 START TEST dd_double_output 00:10:48.767 ************************************ 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:48.767 [2024-07-13 02:59:55.145285] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:48.767 00:10:48.767 real 0m0.166s 00:10:48.767 user 0m0.087s 00:10:48.767 sys 0m0.076s 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:10:48.767 ************************************ 00:10:48.767 END TEST dd_double_output 00:10:48.767 ************************************ 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.767 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:49.026 ************************************ 00:10:49.026 START TEST dd_no_input 00:10:49.026 ************************************ 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:49.026 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:49.026 [2024-07-13 02:59:55.348333] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:49.027 00:10:49.027 real 0m0.138s 00:10:49.027 user 0m0.083s 00:10:49.027 sys 0m0.054s 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:10:49.027 ************************************ 00:10:49.027 END TEST dd_no_input 00:10:49.027 ************************************ 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:49.027 ************************************ 00:10:49.027 START TEST dd_no_output 00:10:49.027 ************************************ 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:49.027 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:49.285 [2024-07-13 02:59:55.556970] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:10:49.285 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:10:49.285 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:49.285 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:49.285 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:49.285 00:10:49.285 real 0m0.163s 00:10:49.285 user 0m0.091s 00:10:49.285 sys 0m0.070s 00:10:49.285 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.285 02:59:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:10:49.285 ************************************ 00:10:49.285 END TEST dd_no_output 00:10:49.285 ************************************ 00:10:49.285 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:49.286 ************************************ 00:10:49.286 START TEST dd_wrong_blocksize 00:10:49.286 ************************************ 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:49.286 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:49.286 [2024-07-13 02:59:55.767920] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:10:49.544 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:10:49.544 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:49.544 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:49.544 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:49.544 00:10:49.544 real 0m0.159s 00:10:49.544 user 0m0.093s 00:10:49.544 sys 0m0.064s 00:10:49.544 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.544 02:59:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:49.544 ************************************ 00:10:49.544 END TEST dd_wrong_blocksize 00:10:49.544 ************************************ 00:10:49.544 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:49.544 02:59:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:10:49.544 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:49.544 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.544 02:59:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:49.544 ************************************ 00:10:49.544 START TEST dd_smaller_blocksize 00:10:49.544 ************************************ 00:10:49.544 02:59:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:10:49.545 02:59:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:49.545 02:59:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:10:49.545 02:59:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:49.545 02:59:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.545 02:59:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.545 02:59:55 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.545 02:59:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.545 02:59:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.545 02:59:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.545 02:59:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.545 02:59:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:49.545 02:59:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:49.545 [2024-07-13 02:59:55.980130] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:49.545 [2024-07-13 02:59:55.980312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67127 ] 00:10:49.803 [2024-07-13 02:59:56.153130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.060 [2024-07-13 02:59:56.377523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.060 [2024-07-13 02:59:56.542857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:50.627 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:10:50.627 [2024-07-13 02:59:56.896270] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:10:50.627 [2024-07-13 02:59:56.896392] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:51.194 [2024-07-13 02:59:57.498751] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:51.453 02:59:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:10:51.453 02:59:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:51.453 02:59:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:10:51.453 02:59:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:10:51.453 02:59:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:10:51.453 02:59:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:51.453 00:10:51.453 real 0m2.000s 00:10:51.453 user 0m1.461s 00:10:51.453 sys 0m0.426s 00:10:51.453 02:59:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:51.453 02:59:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:51.453 ************************************ 00:10:51.453 END TEST dd_smaller_blocksize 00:10:51.453 ************************************ 00:10:51.453 02:59:57 
spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:51.453 02:59:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:10:51.453 02:59:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:51.453 02:59:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:51.454 ************************************ 00:10:51.454 START TEST dd_invalid_count 00:10:51.454 ************************************ 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:51.454 02:59:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:51.712 [2024-07-13 02:59:58.010245] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:10:51.712 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:10:51.712 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:51.712 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:51.712 ************************************ 00:10:51.712 END TEST dd_invalid_count 00:10:51.712 ************************************ 00:10:51.712 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:51.712 00:10:51.712 real 0m0.129s 
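The dd_wrong_blocksize and dd_invalid_count cases above pass precisely because the wrapped spdk_dd call fails: the NOT helper traced here evidently inverts the exit status, and a plain argument-parse failure exits with 22 (EINVAL), which is what the es=22 lines record. A minimal stand-alone sketch of that inversion idea (not necessarily the exact autotest_common.sh helper), with a shortened placeholder path:

NOT() {
    # Succeed only when the wrapped command fails.
    if "$@"; then
        return 1
    fi
    return 0
}
NOT ./build/bin/spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=0   # --bs=0 must be rejected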
00:10:51.712 user 0m0.065s 00:10:51.712 sys 0m0.062s 00:10:51.712 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:51.712 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:10:51.712 02:59:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:51.712 02:59:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:10:51.712 02:59:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:51.712 02:59:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.712 02:59:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:51.712 ************************************ 00:10:51.712 START TEST dd_invalid_oflag 00:10:51.713 ************************************ 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:51.713 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:51.972 [2024-07-13 02:59:58.212589] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:51.972 00:10:51.972 real 0m0.159s 00:10:51.972 user 0m0.085s 00:10:51.972 sys 0m0.073s 00:10:51.972 02:59:58 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:51.972 ************************************ 00:10:51.972 END TEST dd_invalid_oflag 00:10:51.972 ************************************ 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:51.972 ************************************ 00:10:51.972 START TEST dd_invalid_iflag 00:10:51.972 ************************************ 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:51.972 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:51.972 [2024-07-13 02:59:58.422723] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:52.231 00:10:52.231 real 0m0.157s 00:10:52.231 user 0m0.082s 
00:10:52.231 sys 0m0.073s 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.231 ************************************ 00:10:52.231 END TEST dd_invalid_iflag 00:10:52.231 ************************************ 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:52.231 ************************************ 00:10:52.231 START TEST dd_unknown_flag 00:10:52.231 ************************************ 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:52.231 02:59:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:52.231 [2024-07-13 02:59:58.634490] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:52.231 [2024-07-13 02:59:58.634660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67240 ] 00:10:52.490 [2024-07-13 02:59:58.804341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.490 [2024-07-13 02:59:58.962234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.749 [2024-07-13 02:59:59.122267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:52.749 [2024-07-13 02:59:59.199398] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:52.749 [2024-07-13 02:59:59.199480] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:52.749 [2024-07-13 02:59:59.199552] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:52.749 [2024-07-13 02:59:59.199571] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:52.749 [2024-07-13 02:59:59.199825] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:10:52.749 [2024-07-13 02:59:59.199848] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:52.749 [2024-07-13 02:59:59.199937] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:52.749 [2024-07-13 02:59:59.199955] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:53.316 [2024-07-13 02:59:59.791050] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:53.884 00:10:53.884 real 0m1.657s 00:10:53.884 user 0m1.376s 00:10:53.884 sys 0m0.183s 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.884 ************************************ 00:10:53.884 END TEST dd_unknown_flag 00:10:53.884 ************************************ 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:53.884 ************************************ 00:10:53.884 START TEST dd_invalid_json 00:10:53.884 ************************************ 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:10:53.884 03:00:00 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:53.884 03:00:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:53.884 [2024-07-13 03:00:00.349220] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
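The larger es values in these negative tests (es=244 for dd_smaller_blocksize, es=234 for dd_unknown_flag above and for dd_invalid_json below) are what negative return codes look like through the shell's 8-bit exit status: -12 (ENOMEM, matching the "Cannot allocate memory" error) wraps to 256-12=244 and -22 (EINVAL) wraps to 256-22=234, after which the harness drops 128 (244 to 116, 234 to 106) and folds the result to es=1. The wrap itself is easy to reproduce in bash:

( exit -12 ); echo $?   # 244
( exit -22 ); echo $?   # 234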
00:10:53.884 [2024-07-13 03:00:00.349388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67286 ] 00:10:54.142 [2024-07-13 03:00:00.521465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.401 [2024-07-13 03:00:00.684437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.401 [2024-07-13 03:00:00.684545] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:10:54.401 [2024-07-13 03:00:00.684576] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:54.401 [2024-07-13 03:00:00.684590] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:54.401 [2024-07-13 03:00:00.684661] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:54.663 03:00:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:10:54.663 03:00:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:54.663 03:00:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:10:54.663 ************************************ 00:10:54.663 END TEST dd_invalid_json 00:10:54.663 ************************************ 00:10:54.663 03:00:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:10:54.663 03:00:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:10:54.663 03:00:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:54.663 00:10:54.663 real 0m0.814s 00:10:54.663 user 0m0.585s 00:10:54.663 sys 0m0.125s 00:10:54.663 03:00:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.663 03:00:01 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:10:54.663 03:00:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:54.663 ************************************ 00:10:54.663 END TEST spdk_dd_negative 00:10:54.663 ************************************ 00:10:54.663 00:10:54.663 real 0m6.577s 00:10:54.663 user 0m4.404s 00:10:54.663 sys 0m1.782s 00:10:54.663 03:00:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.663 03:00:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:54.663 03:00:01 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:54.663 ************************************ 00:10:54.663 END TEST spdk_dd 00:10:54.663 ************************************ 00:10:54.663 00:10:54.663 real 2m52.249s 00:10:54.663 user 2m20.915s 00:10:54.663 sys 0m59.772s 00:10:54.663 03:00:01 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.663 03:00:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:54.922 03:00:01 -- common/autotest_common.sh@1142 -- # return 0 00:10:54.922 03:00:01 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:10:54.922 03:00:01 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:54.922 03:00:01 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:54.922 03:00:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:54.922 03:00:01 -- common/autotest_common.sh@10 -- # set +x 00:10:54.922 03:00:01 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:10:54.922 03:00:01 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:10:54.922 03:00:01 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:10:54.922 03:00:01 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:10:54.922 03:00:01 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:10:54.922 03:00:01 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:10:54.922 03:00:01 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:54.922 03:00:01 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:54.922 03:00:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.923 03:00:01 -- common/autotest_common.sh@10 -- # set +x 00:10:54.923 ************************************ 00:10:54.923 START TEST nvmf_tcp 00:10:54.923 ************************************ 00:10:54.923 03:00:01 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:54.923 * Looking for test storage... 00:10:54.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:54.923 03:00:01 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.923 03:00:01 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.923 03:00:01 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.923 03:00:01 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.923 03:00:01 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.923 03:00:01 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.923 03:00:01 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:10:54.923 03:00:01 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:10:54.923 03:00:01 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:54.923 03:00:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:10:54.923 03:00:01 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:54.923 03:00:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:54.923 03:00:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.923 03:00:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:54.923 ************************************ 00:10:54.923 START TEST nvmf_host_management 00:10:54.923 ************************************ 00:10:54.923 
03:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:55.182 * Looking for test storage... 00:10:55.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:55.182 03:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
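The nvmf_veth_init sequence that follows first clears out anything left from a previous run, so the "Cannot find device ..." and "Cannot open network namespace ..." complaints below are expected on a clean host; judging by the paired "# true" entries in the trace, each cleanup command is tolerated along these lines (a sketch, not the verbatim common.sh):

ip link set nvmf_init_br nomaster || true
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true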
00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:55.183 Cannot find device "nvmf_init_br" 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:55.183 Cannot find device "nvmf_tgt_br" 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:55.183 Cannot find device "nvmf_tgt_br2" 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:55.183 Cannot find device "nvmf_init_br" 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:55.183 Cannot find device "nvmf_tgt_br" 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:10:55.183 03:00:01 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:55.183 Cannot find device "nvmf_tgt_br2" 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:55.183 Cannot find device "nvmf_br" 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:55.183 Cannot find device "nvmf_init_if" 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:55.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:55.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:55.183 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:55.440 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:55.440 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:55.440 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
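At this point the veth test network is nearly complete: namespace nvmf_tgt_ns_spdk holds the target-side interfaces (10.0.0.2 and 10.0.0.3) while nvmf_init_if (10.0.0.1) stays on the host, and the nvmf_br bridge created above is about to enslave the host-side peers just below. Condensed from the ip/iptables commands in this trace (link-up steps and error handling omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# the pings to 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 then verify connectivity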
00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:55.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:10:55.441 00:10:55.441 --- 10.0.0.2 ping statistics --- 00:10:55.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.441 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:55.441 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:55.441 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:10:55.441 00:10:55.441 --- 10.0.0.3 ping statistics --- 00:10:55.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.441 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:55.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:55.441 00:10:55.441 --- 10.0.0.1 ping statistics --- 00:10:55.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.441 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:55.441 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:55.698 03:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=67542 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 67542 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 67542 ']' 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.699 03:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:55.699 [2024-07-13 03:00:02.067614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:55.699 [2024-07-13 03:00:02.067762] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.957 [2024-07-13 03:00:02.234418] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.215 [2024-07-13 03:00:02.473116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.215 [2024-07-13 03:00:02.473216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.215 [2024-07-13 03:00:02.473237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.215 [2024-07-13 03:00:02.473254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.215 [2024-07-13 03:00:02.473271] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
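nvmfappstart launches nvmf_tgt inside the namespace with -m 0x1E and -e 0xFFFF; 0x1E is binary 11110, i.e. CPU cores 1 through 4, which matches the "Total cores available: 4" notice above and the four "Reactor started on core N" lines that follow, while 0xFFFF enables every tracepoint group (hence the spdk_trace hints above). A one-liner to expand the coremask:

for core in {0..7}; do (( (0x1E >> core) & 1 )) && echo "reactor expected on core $core"; done
# prints cores 1, 2, 3 and 4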
00:10:56.215 [2024-07-13 03:00:02.473505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.215 [2024-07-13 03:00:02.474260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.215 [2024-07-13 03:00:02.474453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:56.215 [2024-07-13 03:00:02.474454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.215 [2024-07-13 03:00:02.682014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:56.472 03:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.472 03:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:56.472 03:00:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:56.472 03:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:56.472 03:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:56.731 03:00:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.731 03:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.731 03:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.731 03:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:56.731 [2024-07-13 03:00:02.972331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:56.731 Malloc0 00:10:56.731 [2024-07-13 03:00:03.105687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:56.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
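The batched RPCs cat'ed into rpc_cmd from rpcs.txt are not echoed in this trace; based on the surrounding output (MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512, the Malloc0 reply, the SPDKISFASTANDAWESOME serial, the listener on 10.0.0.2 port 4420, and the cnode0/host0 NQNs used by the bdevperf config below) the sequence is roughly the following; treat it as a reconstruction, not the verbatim file:

bdev_malloc_create 64 512 -b Malloc0
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0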
00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=67606 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 67606 /var/tmp/bdevperf.sock 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 67606 ']' 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:56.731 { 00:10:56.731 "params": { 00:10:56.731 "name": "Nvme$subsystem", 00:10:56.731 "trtype": "$TEST_TRANSPORT", 00:10:56.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.731 "adrfam": "ipv4", 00:10:56.731 "trsvcid": "$NVMF_PORT", 00:10:56.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.731 "hdgst": ${hdgst:-false}, 00:10:56.731 "ddgst": ${ddgst:-false} 00:10:56.731 }, 00:10:56.731 "method": "bdev_nvme_attach_controller" 00:10:56.731 } 00:10:56.731 EOF 00:10:56.731 )") 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:56.731 03:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:56.732 03:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:56.732 03:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:56.732 "params": { 00:10:56.732 "name": "Nvme0", 00:10:56.732 "trtype": "tcp", 00:10:56.732 "traddr": "10.0.0.2", 00:10:56.732 "adrfam": "ipv4", 00:10:56.732 "trsvcid": "4420", 00:10:56.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:56.732 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:56.732 "hdgst": false, 00:10:56.732 "ddgst": false 00:10:56.732 }, 00:10:56.732 "method": "bdev_nvme_attach_controller" 00:10:56.732 }' 00:10:56.990 [2024-07-13 03:00:03.253358] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
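bdevperf is pointed at its own RPC socket and at the single-controller JSON produced just above (gen_nvmf_target_json, handed in as /dev/fd/63 via process substitution); the remaining flags define the workload. The same invocation with the flag meanings spelled out (paths shortened; flag descriptions summarized, not quoted from the tool):

# -r        private RPC socket, later polled by waitforlisten
# --json    bdev_nvme_attach_controller config shown above
# -q 64     queue depth; -o 65536  I/O size in bytes (64 KiB)
# -w verify verification workload (written data is read back and checked); -t 10  seconds of runtime
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 \
    -q 64 -o 65536 -w verify -t 10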
00:10:56.990 [2024-07-13 03:00:03.253513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67606 ] 00:10:56.990 [2024-07-13 03:00:03.421563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.248 [2024-07-13 03:00:03.605745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.506 [2024-07-13 03:00:03.791626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:57.506 Running I/O for 10 seconds... 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=259 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 259 -ge 100 ']' 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.765 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:57.765 [2024-07-13 03:00:04.237567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.765 [2024-07-13 03:00:04.237811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.765 [2024-07-13 03:00:04.238082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.765 [2024-07-13 03:00:04.238225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.765 [2024-07-13 03:00:04.238259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.765 [2024-07-13 03:00:04.238275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.765 [2024-07-13 03:00:04.238292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.765 [2024-07-13 03:00:04.238306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.765 [2024-07-13 03:00:04.238322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.765 [2024-07-13 03:00:04.238336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.765 [2024-07-13 03:00:04.238351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.765 [2024-07-13 03:00:04.238365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.765 [2024-07-13 03:00:04.238381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.765 [2024-07-13 03:00:04.238394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.765 [2024-07-13 03:00:04.238410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.765 [2024-07-13 03:00:04.238423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.765 [2024-07-13 03:00:04.238439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.765 [2024-07-13 03:00:04.238452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.765 [2024-07-13 03:00:04.238478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:10:57.765 [2024-07-13 03:00:04.238492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.765 [2024-07-13 03:00:04.238507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.765 [2024-07-13 03:00:04.238521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:10:57.766 [2024-07-13 03:00:04.238798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.238975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.238995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 
03:00:04.239117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.766 [2024-07-13 03:00:04.239193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:57.766 [2024-07-13 03:00:04.239351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.766 [2024-07-13 03:00:04.239499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:57.766 [2024-07-13 03:00:04.239602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.766 [2024-07-13 03:00:04.239750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.766 [2024-07-13 03:00:04.239766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.239779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.239795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.239808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.239823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.239837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.239852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.239866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.239893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.239909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.239925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.239939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.239955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.239968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.239985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.239999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.240014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.240028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.240044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.240060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.240076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.240090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.240105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.240119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.240134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:57.767 [2024-07-13 03:00:04.240148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.240165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set 00:10:57.767 [2024-07-13 03:00:04.240429] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 
00:10:57.767 [2024-07-13 03:00:04.240545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.767 [2024-07-13 03:00:04.240568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.240585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.767 [2024-07-13 03:00:04.240598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.240612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.767 [2024-07-13 03:00:04.240625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.240640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.767 [2024-07-13 03:00:04.240653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.767 [2024-07-13 03:00:04.240665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:10:57.767 [2024-07-13 03:00:04.242012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:57.767 task offset: 49024 on job bdev=Nvme0n1 fails 00:10:57.767 00:10:57.767 Latency(us) 00:10:57.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.767 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:57.767 Job: Nvme0n1 ended in about 0.27 seconds with error 00:10:57.767 Verification LBA range: start 0x0 length 0x400 00:10:57.767 Nvme0n1 : 0.27 1168.21 73.01 233.64 0.00 43543.89 8460.10 41466.41 00:10:57.767 =================================================================================================================== 00:10:57.767 Total : 1168.21 73.01 233.64 0.00 43543.89 8460.10 41466.41 00:10:57.767 03:00:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.767 [2024-07-13 03:00:04.247151] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:57.767 [2024-07-13 03:00:04.247199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:10:57.767 03:00:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:58.025 [2024-07-13 03:00:04.259419] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
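The abort storm above is the intended fault injection: host_management.sh removes the allowed host NQN from the subsystem while bdevperf still has a queue depth of 64 in flight, so the queued commands complete with ABORTED - SQ DELETION and the qpair is disconnected, then the host is added back so the initiator's automatic controller reset can reconnect. A minimal sketch of the equivalent manual RPC sequence, assuming the target and NQNs from this run (rpc.py path and socket as used elsewhere in this log):

    # drop the host's access; in-flight I/O is aborted and the qpair is torn down
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # restore access so the reset issued by the initiator can reconnect
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0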
00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 67606 00:10:58.959 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (67606) - No such process 00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:58.959 { 00:10:58.959 "params": { 00:10:58.959 "name": "Nvme$subsystem", 00:10:58.959 "trtype": "$TEST_TRANSPORT", 00:10:58.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:58.959 "adrfam": "ipv4", 00:10:58.959 "trsvcid": "$NVMF_PORT", 00:10:58.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:58.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:58.959 "hdgst": ${hdgst:-false}, 00:10:58.959 "ddgst": ${ddgst:-false} 00:10:58.959 }, 00:10:58.959 "method": "bdev_nvme_attach_controller" 00:10:58.959 } 00:10:58.959 EOF 00:10:58.959 )") 00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:58.959 03:00:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:58.960 "params": { 00:10:58.960 "name": "Nvme0", 00:10:58.960 "trtype": "tcp", 00:10:58.960 "traddr": "10.0.0.2", 00:10:58.960 "adrfam": "ipv4", 00:10:58.960 "trsvcid": "4420", 00:10:58.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:58.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:58.960 "hdgst": false, 00:10:58.960 "ddgst": false 00:10:58.960 }, 00:10:58.960 "method": "bdev_nvme_attach_controller" 00:10:58.960 }' 00:10:58.960 [2024-07-13 03:00:05.363835] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:58.960 [2024-07-13 03:00:05.364014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67641 ] 00:10:59.217 [2024-07-13 03:00:05.537335] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.475 [2024-07-13 03:00:05.715161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.475 [2024-07-13 03:00:05.900841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:59.733 Running I/O for 1 seconds... 
00:11:00.697 00:11:00.697 Latency(us) 00:11:00.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.697 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:00.697 Verification LBA range: start 0x0 length 0x400 00:11:00.697 Nvme0n1 : 1.03 1433.28 89.58 0.00 0.00 43833.63 5153.51 39321.60 00:11:00.697 =================================================================================================================== 00:11:00.697 Total : 1433.28 89.58 0.00 0.00 43833.63 5153.51 39321.60 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:02.069 rmmod nvme_tcp 00:11:02.069 rmmod nvme_fabrics 00:11:02.069 rmmod nvme_keyring 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 67542 ']' 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 67542 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 67542 ']' 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 67542 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67542 00:11:02.069 killing process with pid 67542 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67542' 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 67542 00:11:02.069 03:00:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 67542 00:11:03.002 [2024-07-13 03:00:09.421069] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:11:03.262 03:00:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:03.262 03:00:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:03.262 03:00:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:03.262 03:00:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.262 03:00:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:03.262 03:00:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.262 03:00:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.262 03:00:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.262 03:00:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:03.262 03:00:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:03.262 00:11:03.262 real 0m8.195s 00:11:03.262 user 0m31.525s 00:11:03.262 sys 0m1.559s 00:11:03.262 03:00:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:03.262 03:00:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:03.262 ************************************ 00:11:03.262 END TEST nvmf_host_management 00:11:03.262 ************************************ 00:11:03.262 03:00:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:03.262 03:00:09 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:03.262 03:00:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:03.262 03:00:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.262 03:00:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:03.262 ************************************ 00:11:03.262 START TEST nvmf_lvol 00:11:03.262 ************************************ 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:03.262 * Looking for test storage... 
00:11:03.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:03.262 03:00:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:03.263 03:00:09 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:03.263 Cannot find device "nvmf_tgt_br" 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.263 Cannot find device "nvmf_tgt_br2" 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:11:03.263 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:03.522 Cannot find device "nvmf_tgt_br" 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:03.522 Cannot find device "nvmf_tgt_br2" 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:03.522 03:00:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:03.522 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.522 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.522 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:03.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:11:03.781 00:11:03.781 --- 10.0.0.2 ping statistics --- 00:11:03.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.781 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:03.781 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.781 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:11:03.781 00:11:03.781 --- 10.0.0.3 ping statistics --- 00:11:03.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.781 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:03.781 00:11:03.781 --- 10.0.0.1 ping statistics --- 00:11:03.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.781 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=67876 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 67876 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 67876 ']' 00:11:03.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.781 03:00:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:03.781 [2024-07-13 03:00:10.158592] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:03.781 [2024-07-13 03:00:10.159320] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.040 [2024-07-13 03:00:10.319562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:04.040 [2024-07-13 03:00:10.484344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.040 [2024-07-13 03:00:10.484663] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:04.040 [2024-07-13 03:00:10.484820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.040 [2024-07-13 03:00:10.484997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.040 [2024-07-13 03:00:10.485049] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.040 [2024-07-13 03:00:10.485365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.040 [2024-07-13 03:00:10.485492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.040 [2024-07-13 03:00:10.485504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.299 [2024-07-13 03:00:10.657379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:04.564 03:00:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.564 03:00:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:04.565 03:00:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.565 03:00:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.565 03:00:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:04.823 03:00:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.823 03:00:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:05.082 [2024-07-13 03:00:11.333005] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.082 03:00:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.340 03:00:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:05.340 03:00:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.598 03:00:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:05.598 03:00:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:05.857 03:00:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:06.116 03:00:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e25402a9-1fd0-4ce2-860e-e71d18b6de55 00:11:06.116 03:00:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e25402a9-1fd0-4ce2-860e-e71d18b6de55 lvol 20 00:11:06.374 03:00:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a813139f-9c7e-419b-a0ba-446eaee3a6b9 00:11:06.374 03:00:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:06.638 03:00:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a813139f-9c7e-419b-a0ba-446eaee3a6b9 00:11:06.902 03:00:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:07.159 [2024-07-13 03:00:13.439782] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.159 03:00:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:07.416 03:00:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=67955 00:11:07.416 03:00:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:07.416 03:00:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:08.349 03:00:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot a813139f-9c7e-419b-a0ba-446eaee3a6b9 MY_SNAPSHOT 00:11:08.607 03:00:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b20466ca-9042-4d65-b42e-bf8b28637ce4 00:11:08.607 03:00:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize a813139f-9c7e-419b-a0ba-446eaee3a6b9 30 00:11:08.866 03:00:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone b20466ca-9042-4d65-b42e-bf8b28637ce4 MY_CLONE 00:11:09.431 03:00:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9b853a01-949b-4a93-aada-1d29c11418d2 00:11:09.431 03:00:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 9b853a01-949b-4a93-aada-1d29c11418d2 00:11:09.995 03:00:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 67955 00:11:18.096 Initializing NVMe Controllers 00:11:18.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:18.096 Controller IO queue size 128, less than required. 00:11:18.096 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:18.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:18.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:18.096 Initialization complete. Launching workers. 
00:11:18.096 ======================================================== 00:11:18.096 Latency(us) 00:11:18.096 Device Information : IOPS MiB/s Average min max 00:11:18.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9131.90 35.67 14019.88 300.14 164076.53 00:11:18.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8985.80 35.10 14244.23 5310.42 173613.06 00:11:18.096 ======================================================== 00:11:18.096 Total : 18117.69 70.77 14131.15 300.14 173613.06 00:11:18.096 00:11:18.096 03:00:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:18.096 03:00:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a813139f-9c7e-419b-a0ba-446eaee3a6b9 00:11:18.354 03:00:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e25402a9-1fd0-4ce2-860e-e71d18b6de55 00:11:18.612 03:00:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:18.612 03:00:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:18.612 03:00:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:18.612 03:00:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:18.612 03:00:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:18.612 03:00:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:18.612 03:00:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:18.612 03:00:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.612 03:00:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:18.612 rmmod nvme_tcp 00:11:18.612 rmmod nvme_fabrics 00:11:18.612 rmmod nvme_keyring 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 67876 ']' 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 67876 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 67876 ']' 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 67876 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67876 00:11:18.612 killing process with pid 67876 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67876' 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 67876 00:11:18.612 03:00:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 67876 00:11:20.026 03:00:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.026 03:00:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
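As a quick cross-check of the per-core table above, the Total row is simply the IOPS-weighted average of the two per-core latencies; the small difference in the last digits comes from the per-core values already being rounded when printed. A minimal awk sketch using only the numbers shown above (the variable names are illustrative, not part of the test scripts):

    core3_iops=9131.90; core3_lat=14019.88    # TCP NSID 1 from core 3
    core4_iops=8985.80; core4_lat=14244.23    # TCP NSID 1 from core 4
    awk -v i3="$core3_iops" -v l3="$core3_lat" -v i4="$core4_iops" -v l4="$core4_lat" 'BEGIN {
        total = i3 + i4                          # ~18117.7 IOPS, matching the Total row
        wavg  = (i3 * l3 + i4 * l4) / total      # ~14131 us, agreeing with the Total average to within rounding
        printf "total IOPS %.2f, weighted avg latency %.2f us\n", total, wavg
    }'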
00:11:20.026 03:00:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:20.026 03:00:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.026 03:00:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.026 03:00:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.026 03:00:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.026 03:00:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.026 03:00:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:20.026 ************************************ 00:11:20.026 END TEST nvmf_lvol 00:11:20.026 ************************************ 00:11:20.026 00:11:20.026 real 0m16.838s 00:11:20.026 user 1m7.947s 00:11:20.026 sys 0m3.988s 00:11:20.026 03:00:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.026 03:00:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:20.026 03:00:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:20.026 03:00:26 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:20.026 03:00:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:20.026 03:00:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.026 03:00:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:20.026 ************************************ 00:11:20.026 START TEST nvmf_lvs_grow 00:11:20.026 ************************************ 00:11:20.026 03:00:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:20.285 * Looking for test storage... 
00:11:20.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:20.285 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:20.286 Cannot find device "nvmf_tgt_br" 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:20.286 Cannot find device "nvmf_tgt_br2" 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:20.286 Cannot find device "nvmf_tgt_br" 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:20.286 Cannot find device "nvmf_tgt_br2" 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:20.286 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:20.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:20.286 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:20.544 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:20.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:20.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:11:20.545 00:11:20.545 --- 10.0.0.2 ping statistics --- 00:11:20.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.545 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:20.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:20.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:11:20.545 00:11:20.545 --- 10.0.0.3 ping statistics --- 00:11:20.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.545 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:20.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:20.545 00:11:20.545 --- 10.0.0.1 ping statistics --- 00:11:20.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.545 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:20.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=68292 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 68292 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 68292 ']' 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.545 03:00:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:20.803 [2024-07-13 03:00:27.099110] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:20.803 [2024-07-13 03:00:27.099532] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.803 [2024-07-13 03:00:27.271172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.062 [2024-07-13 03:00:27.499117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.062 [2024-07-13 03:00:27.499405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.062 [2024-07-13 03:00:27.499574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.062 [2024-07-13 03:00:27.499827] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.062 [2024-07-13 03:00:27.499974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.062 [2024-07-13 03:00:27.500059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.320 [2024-07-13 03:00:27.670335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:21.578 03:00:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.578 03:00:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:21.578 03:00:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:21.578 03:00:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:21.578 03:00:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:21.578 03:00:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.578 03:00:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:21.836 [2024-07-13 03:00:28.265221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.836 03:00:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:21.836 03:00:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:21.836 03:00:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.836 03:00:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:21.836 ************************************ 00:11:21.836 START TEST lvs_grow_clean 00:11:21.836 ************************************ 00:11:21.836 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:21.836 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:21.836 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:21.836 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:21.836 03:00:28 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:21.836 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:21.837 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:21.837 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:21.837 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:21.837 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:22.094 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:22.094 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:22.352 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7724da72-788e-4a48-bced-4a7fb8e324fc 00:11:22.352 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7724da72-788e-4a48-bced-4a7fb8e324fc 00:11:22.352 03:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:22.611 03:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:22.611 03:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:22.611 03:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7724da72-788e-4a48-bced-4a7fb8e324fc lvol 150 00:11:22.869 03:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b2a6ebc1-3b7b-45af-98d7-5a5d8f831207 00:11:22.869 03:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:22.869 03:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:23.128 [2024-07-13 03:00:29.508525] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:23.128 [2024-07-13 03:00:29.508628] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:23.128 true 00:11:23.128 03:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7724da72-788e-4a48-bced-4a7fb8e324fc 00:11:23.128 03:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:23.386 03:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:23.386 03:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:23.644 03:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b2a6ebc1-3b7b-45af-98d7-5a5d8f831207 00:11:23.903 03:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:24.161 [2024-07-13 03:00:30.449416] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.161 03:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:24.420 03:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68377 00:11:24.420 03:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:24.420 03:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:24.420 03:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68377 /var/tmp/bdevperf.sock 00:11:24.420 03:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 68377 ']' 00:11:24.420 03:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:24.420 03:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:24.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:24.420 03:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:24.420 03:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:24.420 03:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:24.420 [2024-07-13 03:00:30.830756] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:24.420 [2024-07-13 03:00:30.830935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68377 ] 00:11:24.680 [2024-07-13 03:00:31.011307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.945 [2024-07-13 03:00:31.240717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.945 [2024-07-13 03:00:31.431489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:25.511 03:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:25.511 03:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:25.511 03:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:25.768 Nvme0n1 00:11:25.769 03:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:26.026 [ 00:11:26.026 { 00:11:26.026 "name": "Nvme0n1", 00:11:26.026 "aliases": [ 00:11:26.026 "b2a6ebc1-3b7b-45af-98d7-5a5d8f831207" 00:11:26.026 ], 00:11:26.026 "product_name": "NVMe disk", 00:11:26.026 "block_size": 4096, 00:11:26.026 "num_blocks": 38912, 00:11:26.026 "uuid": "b2a6ebc1-3b7b-45af-98d7-5a5d8f831207", 00:11:26.026 "assigned_rate_limits": { 00:11:26.026 "rw_ios_per_sec": 0, 00:11:26.026 "rw_mbytes_per_sec": 0, 00:11:26.026 "r_mbytes_per_sec": 0, 00:11:26.026 "w_mbytes_per_sec": 0 00:11:26.026 }, 00:11:26.026 "claimed": false, 00:11:26.026 "zoned": false, 00:11:26.026 "supported_io_types": { 00:11:26.026 "read": true, 00:11:26.026 "write": true, 00:11:26.026 "unmap": true, 00:11:26.026 "flush": true, 00:11:26.026 "reset": true, 00:11:26.026 "nvme_admin": true, 00:11:26.026 "nvme_io": true, 00:11:26.026 "nvme_io_md": false, 00:11:26.026 "write_zeroes": true, 00:11:26.026 "zcopy": false, 00:11:26.026 "get_zone_info": false, 00:11:26.026 "zone_management": false, 00:11:26.026 "zone_append": false, 00:11:26.026 "compare": true, 00:11:26.026 "compare_and_write": true, 00:11:26.026 "abort": true, 00:11:26.026 "seek_hole": false, 00:11:26.026 "seek_data": false, 00:11:26.026 "copy": true, 00:11:26.026 "nvme_iov_md": false 00:11:26.026 }, 00:11:26.026 "memory_domains": [ 00:11:26.026 { 00:11:26.026 "dma_device_id": "system", 00:11:26.026 "dma_device_type": 1 00:11:26.026 } 00:11:26.026 ], 00:11:26.026 "driver_specific": { 00:11:26.026 "nvme": [ 00:11:26.026 { 00:11:26.026 "trid": { 00:11:26.026 "trtype": "TCP", 00:11:26.026 "adrfam": "IPv4", 00:11:26.026 "traddr": "10.0.0.2", 00:11:26.026 "trsvcid": "4420", 00:11:26.026 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:26.026 }, 00:11:26.026 "ctrlr_data": { 00:11:26.026 "cntlid": 1, 00:11:26.026 "vendor_id": "0x8086", 00:11:26.026 "model_number": "SPDK bdev Controller", 00:11:26.026 "serial_number": "SPDK0", 00:11:26.026 "firmware_revision": "24.09", 00:11:26.026 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:26.026 "oacs": { 00:11:26.026 "security": 0, 00:11:26.026 "format": 0, 00:11:26.026 "firmware": 0, 00:11:26.026 "ns_manage": 0 00:11:26.026 }, 00:11:26.026 "multi_ctrlr": true, 00:11:26.026 
"ana_reporting": false 00:11:26.026 }, 00:11:26.026 "vs": { 00:11:26.026 "nvme_version": "1.3" 00:11:26.026 }, 00:11:26.026 "ns_data": { 00:11:26.026 "id": 1, 00:11:26.026 "can_share": true 00:11:26.026 } 00:11:26.026 } 00:11:26.026 ], 00:11:26.026 "mp_policy": "active_passive" 00:11:26.026 } 00:11:26.026 } 00:11:26.026 ] 00:11:26.026 03:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:26.026 03:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68399 00:11:26.026 03:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:26.026 Running I/O for 10 seconds... 00:11:27.423 Latency(us) 00:11:27.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:27.423 Nvme0n1 : 1.00 5715.00 22.32 0.00 0.00 0.00 0.00 0.00 00:11:27.423 =================================================================================================================== 00:11:27.423 Total : 5715.00 22.32 0.00 0.00 0.00 0.00 0.00 00:11:27.423 00:11:27.987 03:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7724da72-788e-4a48-bced-4a7fb8e324fc 00:11:28.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:28.244 Nvme0n1 : 2.00 5778.50 22.57 0.00 0.00 0.00 0.00 0.00 00:11:28.244 =================================================================================================================== 00:11:28.244 Total : 5778.50 22.57 0.00 0.00 0.00 0.00 0.00 00:11:28.244 00:11:28.244 true 00:11:28.244 03:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7724da72-788e-4a48-bced-4a7fb8e324fc 00:11:28.244 03:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:28.502 03:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:28.502 03:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:28.502 03:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 68399 00:11:29.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:29.067 Nvme0n1 : 3.00 5884.33 22.99 0.00 0.00 0.00 0.00 0.00 00:11:29.067 =================================================================================================================== 00:11:29.067 Total : 5884.33 22.99 0.00 0.00 0.00 0.00 0.00 00:11:29.067 00:11:30.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.001 Nvme0n1 : 4.00 5905.50 23.07 0.00 0.00 0.00 0.00 0.00 00:11:30.001 =================================================================================================================== 00:11:30.001 Total : 5905.50 23.07 0.00 0.00 0.00 0.00 0.00 00:11:30.001 00:11:31.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.373 Nvme0n1 : 5.00 5867.40 22.92 0.00 0.00 0.00 0.00 0.00 00:11:31.373 =================================================================================================================== 00:11:31.373 Total : 5867.40 22.92 0.00 0.00 0.00 
0.00 0.00 00:11:31.373 00:11:32.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:32.307 Nvme0n1 : 6.00 5863.17 22.90 0.00 0.00 0.00 0.00 0.00 00:11:32.307 =================================================================================================================== 00:11:32.307 Total : 5863.17 22.90 0.00 0.00 0.00 0.00 0.00 00:11:32.307 00:11:33.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.240 Nvme0n1 : 7.00 5914.57 23.10 0.00 0.00 0.00 0.00 0.00 00:11:33.240 =================================================================================================================== 00:11:33.240 Total : 5914.57 23.10 0.00 0.00 0.00 0.00 0.00 00:11:33.240 00:11:34.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:34.186 Nvme0n1 : 8.00 5834.62 22.79 0.00 0.00 0.00 0.00 0.00 00:11:34.186 =================================================================================================================== 00:11:34.186 Total : 5834.62 22.79 0.00 0.00 0.00 0.00 0.00 00:11:34.186 00:11:35.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.183 Nvme0n1 : 9.00 5807.22 22.68 0.00 0.00 0.00 0.00 0.00 00:11:35.183 =================================================================================================================== 00:11:35.183 Total : 5807.22 22.68 0.00 0.00 0.00 0.00 0.00 00:11:35.183 00:11:36.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.116 Nvme0n1 : 10.00 5798.00 22.65 0.00 0.00 0.00 0.00 0.00 00:11:36.116 =================================================================================================================== 00:11:36.116 Total : 5798.00 22.65 0.00 0.00 0.00 0.00 0.00 00:11:36.116 00:11:36.116 00:11:36.116 Latency(us) 00:11:36.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.116 Nvme0n1 : 10.02 5797.60 22.65 0.00 0.00 22071.80 6732.33 118203.11 00:11:36.116 =================================================================================================================== 00:11:36.116 Total : 5797.60 22.65 0.00 0.00 22071.80 6732.33 118203.11 00:11:36.116 0 00:11:36.116 03:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68377 00:11:36.116 03:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 68377 ']' 00:11:36.116 03:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 68377 00:11:36.116 03:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:11:36.116 03:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:36.116 03:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68377 00:11:36.116 killing process with pid 68377 00:11:36.116 03:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:36.116 03:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:36.116 03:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68377' 00:11:36.116 03:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 68377 
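The 10-second summary above is consistent with Little's law for a closed-loop workload: bdevperf was started with a fixed queue depth of 128 (-q 128), so the sustainable rate is roughly QD divided by the average completion latency, which lands close to the measured 5797.60 IOPS. A minimal sketch of that arithmetic (illustrative only, not part of the test output):

    awk 'BEGIN {
        qd    = 128               # -q 128 passed to bdevperf above
        lat_s = 22071.80 / 1e6    # reported average latency, converted to seconds
        printf "predicted IOPS ~= %.0f\n", qd / lat_s    # ~5800, vs. 5797.60 measured
    }'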
00:11:36.116 Received shutdown signal, test time was about 10.000000 seconds 00:11:36.116 00:11:36.116 Latency(us) 00:11:36.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.116 =================================================================================================================== 00:11:36.116 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:36.116 03:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 68377 00:11:37.489 03:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:37.747 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:38.004 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7724da72-788e-4a48-bced-4a7fb8e324fc 00:11:38.004 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:38.261 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:38.262 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:38.262 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:38.520 [2024-07-13 03:00:44.759883] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:38.520 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7724da72-788e-4a48-bced-4a7fb8e324fc 00:11:38.520 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:38.520 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7724da72-788e-4a48-bced-4a7fb8e324fc 00:11:38.520 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:38.520 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:38.520 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:38.520 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:38.520 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:38.520 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:38.520 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:38.520 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:38.520 03:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 7724da72-788e-4a48-bced-4a7fb8e324fc 00:11:38.778 request: 00:11:38.778 { 00:11:38.778 "uuid": "7724da72-788e-4a48-bced-4a7fb8e324fc", 00:11:38.778 "method": "bdev_lvol_get_lvstores", 00:11:38.778 "req_id": 1 00:11:38.778 } 00:11:38.778 Got JSON-RPC error response 00:11:38.778 response: 00:11:38.778 { 00:11:38.778 "code": -19, 00:11:38.778 "message": "No such device" 00:11:38.778 } 00:11:38.778 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:38.778 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:38.778 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:38.778 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:38.778 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:39.036 aio_bdev 00:11:39.036 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b2a6ebc1-3b7b-45af-98d7-5a5d8f831207 00:11:39.036 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=b2a6ebc1-3b7b-45af-98d7-5a5d8f831207 00:11:39.036 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:39.036 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:11:39.036 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:39.036 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:39.036 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:39.293 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b2a6ebc1-3b7b-45af-98d7-5a5d8f831207 -t 2000 00:11:39.551 [ 00:11:39.551 { 00:11:39.551 "name": "b2a6ebc1-3b7b-45af-98d7-5a5d8f831207", 00:11:39.551 "aliases": [ 00:11:39.551 "lvs/lvol" 00:11:39.551 ], 00:11:39.551 "product_name": "Logical Volume", 00:11:39.551 "block_size": 4096, 00:11:39.551 "num_blocks": 38912, 00:11:39.551 "uuid": "b2a6ebc1-3b7b-45af-98d7-5a5d8f831207", 00:11:39.551 "assigned_rate_limits": { 00:11:39.551 "rw_ios_per_sec": 0, 00:11:39.551 "rw_mbytes_per_sec": 0, 00:11:39.551 "r_mbytes_per_sec": 0, 00:11:39.551 "w_mbytes_per_sec": 0 00:11:39.551 }, 00:11:39.551 "claimed": false, 00:11:39.551 "zoned": false, 00:11:39.551 "supported_io_types": { 00:11:39.551 "read": true, 00:11:39.551 "write": true, 00:11:39.552 "unmap": true, 00:11:39.552 "flush": false, 00:11:39.552 "reset": true, 00:11:39.552 "nvme_admin": false, 00:11:39.552 "nvme_io": false, 00:11:39.552 "nvme_io_md": false, 00:11:39.552 "write_zeroes": true, 00:11:39.552 "zcopy": false, 00:11:39.552 "get_zone_info": false, 00:11:39.552 "zone_management": false, 00:11:39.552 "zone_append": false, 00:11:39.552 "compare": false, 00:11:39.552 "compare_and_write": false, 00:11:39.552 "abort": false, 00:11:39.552 "seek_hole": true, 00:11:39.552 "seek_data": true, 00:11:39.552 "copy": false, 00:11:39.552 "nvme_iov_md": false 00:11:39.552 }, 00:11:39.552 "driver_specific": { 00:11:39.552 "lvol": { 
00:11:39.552 "lvol_store_uuid": "7724da72-788e-4a48-bced-4a7fb8e324fc", 00:11:39.552 "base_bdev": "aio_bdev", 00:11:39.552 "thin_provision": false, 00:11:39.552 "num_allocated_clusters": 38, 00:11:39.552 "snapshot": false, 00:11:39.552 "clone": false, 00:11:39.552 "esnap_clone": false 00:11:39.552 } 00:11:39.552 } 00:11:39.552 } 00:11:39.552 ] 00:11:39.552 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:39.552 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7724da72-788e-4a48-bced-4a7fb8e324fc 00:11:39.552 03:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:39.809 03:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:39.809 03:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7724da72-788e-4a48-bced-4a7fb8e324fc 00:11:39.809 03:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:40.067 03:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:40.067 03:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b2a6ebc1-3b7b-45af-98d7-5a5d8f831207 00:11:40.326 03:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7724da72-788e-4a48-bced-4a7fb8e324fc 00:11:40.584 03:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:40.842 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:41.100 ************************************ 00:11:41.100 END TEST lvs_grow_clean 00:11:41.100 ************************************ 00:11:41.100 00:11:41.100 real 0m19.163s 00:11:41.100 user 0m18.289s 00:11:41.100 sys 0m2.343s 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:41.100 ************************************ 00:11:41.100 START TEST lvs_grow_dirty 00:11:41.100 ************************************ 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:41.100 03:00:47 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:41.100 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:41.357 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:41.357 03:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:41.615 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=08c77a67-192f-4aec-874e-f0832a4b4afd 00:11:41.615 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:11:41.615 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:41.871 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:41.871 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:41.871 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 08c77a67-192f-4aec-874e-f0832a4b4afd lvol 150 00:11:42.129 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4e9e3917-fc44-4b95-8bf6-1fad04d7347e 00:11:42.129 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:42.129 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:42.387 [2024-07-13 03:00:48.768793] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:42.387 [2024-07-13 03:00:48.768927] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:42.387 true 00:11:42.388 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:11:42.388 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:42.645 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:11:42.645 03:00:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:42.902 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4e9e3917-fc44-4b95-8bf6-1fad04d7347e 00:11:43.159 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:43.159 [2024-07-13 03:00:49.605555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.159 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:43.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:43.417 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68658 00:11:43.417 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:43.417 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:43.417 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68658 /var/tmp/bdevperf.sock 00:11:43.417 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 68658 ']' 00:11:43.417 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:43.417 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.417 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:43.417 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.417 03:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:43.417 [2024-07-13 03:00:49.901691] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
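Condensed from the trace above, the lvs_grow_dirty setup boils down to the RPC sequence below. This is a minimal sketch: $SPDK stands for the repo root seen in the log, and the lvstore/lvol UUIDs are placeholders that differ on every run.

# 200 MiB file-backed AIO bdev hosting the lvstore, 4 MiB clusters
truncate -s 200M $SPDK/test/nvmf/target/aio_bdev
$SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
$SPDK/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
    --md-pages-per-cluster-ratio 300 aio_bdev lvs              # prints the lvstore UUID
$SPDK/scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150   # 150 MiB thick-provisioned lvol
# grow the backing file, then let the AIO bdev pick up the new size
truncate -s 400M $SPDK/test/nvmf/target/aio_bdev
$SPDK/scripts/rpc.py bdev_aio_rescan aio_bdev
# export the lvol over NVMe/TCP
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The oversized --md-pages-per-cluster-ratio reserves extra metadata pages up front, which is what allows the lvstore to be grown in place later without reformatting.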
00:11:43.417 [2024-07-13 03:00:49.902230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68658 ] 00:11:43.674 [2024-07-13 03:00:50.062327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.933 [2024-07-13 03:00:50.271655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.933 [2024-07-13 03:00:50.418176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:44.500 03:00:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.500 03:00:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:44.500 03:00:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:44.759 Nvme0n1 00:11:44.759 03:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:45.018 [ 00:11:45.018 { 00:11:45.018 "name": "Nvme0n1", 00:11:45.018 "aliases": [ 00:11:45.018 "4e9e3917-fc44-4b95-8bf6-1fad04d7347e" 00:11:45.018 ], 00:11:45.018 "product_name": "NVMe disk", 00:11:45.018 "block_size": 4096, 00:11:45.018 "num_blocks": 38912, 00:11:45.018 "uuid": "4e9e3917-fc44-4b95-8bf6-1fad04d7347e", 00:11:45.018 "assigned_rate_limits": { 00:11:45.018 "rw_ios_per_sec": 0, 00:11:45.018 "rw_mbytes_per_sec": 0, 00:11:45.018 "r_mbytes_per_sec": 0, 00:11:45.018 "w_mbytes_per_sec": 0 00:11:45.018 }, 00:11:45.018 "claimed": false, 00:11:45.018 "zoned": false, 00:11:45.018 "supported_io_types": { 00:11:45.018 "read": true, 00:11:45.018 "write": true, 00:11:45.018 "unmap": true, 00:11:45.018 "flush": true, 00:11:45.018 "reset": true, 00:11:45.018 "nvme_admin": true, 00:11:45.018 "nvme_io": true, 00:11:45.018 "nvme_io_md": false, 00:11:45.018 "write_zeroes": true, 00:11:45.018 "zcopy": false, 00:11:45.018 "get_zone_info": false, 00:11:45.018 "zone_management": false, 00:11:45.018 "zone_append": false, 00:11:45.018 "compare": true, 00:11:45.018 "compare_and_write": true, 00:11:45.018 "abort": true, 00:11:45.018 "seek_hole": false, 00:11:45.018 "seek_data": false, 00:11:45.018 "copy": true, 00:11:45.018 "nvme_iov_md": false 00:11:45.018 }, 00:11:45.018 "memory_domains": [ 00:11:45.018 { 00:11:45.018 "dma_device_id": "system", 00:11:45.018 "dma_device_type": 1 00:11:45.018 } 00:11:45.018 ], 00:11:45.018 "driver_specific": { 00:11:45.018 "nvme": [ 00:11:45.018 { 00:11:45.018 "trid": { 00:11:45.018 "trtype": "TCP", 00:11:45.018 "adrfam": "IPv4", 00:11:45.018 "traddr": "10.0.0.2", 00:11:45.018 "trsvcid": "4420", 00:11:45.018 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:45.018 }, 00:11:45.018 "ctrlr_data": { 00:11:45.018 "cntlid": 1, 00:11:45.018 "vendor_id": "0x8086", 00:11:45.018 "model_number": "SPDK bdev Controller", 00:11:45.018 "serial_number": "SPDK0", 00:11:45.018 "firmware_revision": "24.09", 00:11:45.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:45.018 "oacs": { 00:11:45.018 "security": 0, 00:11:45.018 "format": 0, 00:11:45.018 "firmware": 0, 00:11:45.018 "ns_manage": 0 00:11:45.018 }, 00:11:45.018 "multi_ctrlr": true, 00:11:45.018 
"ana_reporting": false 00:11:45.018 }, 00:11:45.018 "vs": { 00:11:45.018 "nvme_version": "1.3" 00:11:45.018 }, 00:11:45.018 "ns_data": { 00:11:45.018 "id": 1, 00:11:45.018 "can_share": true 00:11:45.018 } 00:11:45.018 } 00:11:45.018 ], 00:11:45.018 "mp_policy": "active_passive" 00:11:45.018 } 00:11:45.018 } 00:11:45.018 ] 00:11:45.018 03:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68682 00:11:45.018 03:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:45.018 03:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:45.018 Running I/O for 10 seconds... 00:11:45.952 Latency(us) 00:11:45.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:45.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.952 Nvme0n1 : 1.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:11:45.952 =================================================================================================================== 00:11:45.952 Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:11:45.952 00:11:46.885 03:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:11:47.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.143 Nvme0n1 : 2.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:11:47.143 =================================================================================================================== 00:11:47.143 Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:11:47.143 00:11:47.143 true 00:11:47.143 03:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:11:47.143 03:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:47.709 03:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:47.709 03:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:47.709 03:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 68682 00:11:47.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.966 Nvme0n1 : 3.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:11:47.966 =================================================================================================================== 00:11:47.966 Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:11:47.966 00:11:49.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.356 Nvme0n1 : 4.00 5778.50 22.57 0.00 0.00 0.00 0.00 0.00 00:11:49.356 =================================================================================================================== 00:11:49.356 Total : 5778.50 22.57 0.00 0.00 0.00 0.00 0.00 00:11:49.356 00:11:49.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.922 Nvme0n1 : 5.00 5791.20 22.62 0.00 0.00 0.00 0.00 0.00 00:11:49.922 =================================================================================================================== 00:11:49.922 Total : 5791.20 22.62 0.00 0.00 0.00 
0.00 0.00 00:11:49.922 00:11:51.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.297 Nvme0n1 : 6.00 5718.67 22.34 0.00 0.00 0.00 0.00 0.00 00:11:51.297 =================================================================================================================== 00:11:51.297 Total : 5718.67 22.34 0.00 0.00 0.00 0.00 0.00 00:11:51.297 00:11:52.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.234 Nvme0n1 : 7.00 5700.00 22.27 0.00 0.00 0.00 0.00 0.00 00:11:52.234 =================================================================================================================== 00:11:52.234 Total : 5700.00 22.27 0.00 0.00 0.00 0.00 0.00 00:11:52.234 00:11:53.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.170 Nvme0n1 : 8.00 5686.00 22.21 0.00 0.00 0.00 0.00 0.00 00:11:53.170 =================================================================================================================== 00:11:53.170 Total : 5686.00 22.21 0.00 0.00 0.00 0.00 0.00 00:11:53.170 00:11:54.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.111 Nvme0n1 : 9.00 5689.22 22.22 0.00 0.00 0.00 0.00 0.00 00:11:54.111 =================================================================================================================== 00:11:54.111 Total : 5689.22 22.22 0.00 0.00 0.00 0.00 0.00 00:11:54.111 00:11:55.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.046 Nvme0n1 : 10.00 5691.80 22.23 0.00 0.00 0.00 0.00 0.00 00:11:55.046 =================================================================================================================== 00:11:55.046 Total : 5691.80 22.23 0.00 0.00 0.00 0.00 0.00 00:11:55.046 00:11:55.046 00:11:55.046 Latency(us) 00:11:55.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.046 Nvme0n1 : 10.02 5694.01 22.24 0.00 0.00 22471.95 14834.97 91988.71 00:11:55.046 =================================================================================================================== 00:11:55.046 Total : 5694.01 22.24 0.00 0.00 22471.95 14834.97 91988.71 00:11:55.046 0 00:11:55.046 03:01:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68658 00:11:55.046 03:01:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 68658 ']' 00:11:55.046 03:01:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 68658 00:11:55.046 03:01:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:55.046 03:01:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:55.046 03:01:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68658 00:11:55.046 03:01:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:55.046 03:01:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:55.046 killing process with pid 68658 00:11:55.046 03:01:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68658' 00:11:55.046 Received shutdown signal, test time was about 10.000000 seconds 00:11:55.046 00:11:55.046 
Latency(us) 00:11:55.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.046 =================================================================================================================== 00:11:55.046 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:55.046 03:01:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 68658 00:11:55.046 03:01:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 68658 00:11:56.422 03:01:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:56.422 03:01:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:56.680 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:11:56.680 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 68292 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 68292 00:11:56.938 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 68292 Killed "${NVMF_APP[@]}" "$@" 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=68828 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 68828 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 68828 ']' 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
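The 10-second randwrite load in the table above comes from the standalone bdevperf example, started idle with -z and driven over its own RPC socket. Roughly, with the same socket path and flags as the log and <lvs-uuid> again a placeholder:

$SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
    -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 2
$SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>      # grown while I/O is in flight

Growing the lvstore while the run is in flight is the point of the test: afterwards free_clusters reads 61 and total_data_clusters 99, exactly as the checks above expect.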
00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.938 03:01:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:57.197 [2024-07-13 03:01:03.475330] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:57.197 [2024-07-13 03:01:03.475464] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.197 [2024-07-13 03:01:03.640284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.455 [2024-07-13 03:01:03.800868] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.455 [2024-07-13 03:01:03.800989] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.455 [2024-07-13 03:01:03.801006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.455 [2024-07-13 03:01:03.801020] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.455 [2024-07-13 03:01:03.801030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.455 [2024-07-13 03:01:03.801068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.713 [2024-07-13 03:01:03.959456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:57.972 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.972 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:57.972 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:57.972 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:57.972 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:57.972 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.973 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:58.231 [2024-07-13 03:01:04.629129] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:58.231 [2024-07-13 03:01:04.630147] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:58.231 [2024-07-13 03:01:04.630705] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:58.231 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:58.231 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4e9e3917-fc44-4b95-8bf6-1fad04d7347e 00:11:58.231 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=4e9e3917-fc44-4b95-8bf6-1fad04d7347e 00:11:58.231 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:58.231 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
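What makes this the "dirty" leg is that the previous target (pid 68292) was killed with -9, so the lvstore was never unloaded cleanly and the reload above has to run blobstore recovery. Stripped down, and under the same $SPDK and placeholder-UUID assumptions:

kill -9 "$nvmfpid"                          # no clean shutdown: lvstore metadata left dirty
ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
$SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
#   -> "Performing recovery on blobstore", lvol blobs replayed
$SPDK/scripts/rpc.py bdev_wait_for_examine
$SPDK/scripts/rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000      # the lvol is back
$SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'

The checks that follow expect the recovered lvstore to report the same 61 free and 99 total data clusters it had before the kill.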
00:11:58.231 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:58.231 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:58.231 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:58.488 03:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e9e3917-fc44-4b95-8bf6-1fad04d7347e -t 2000 00:11:58.745 [ 00:11:58.745 { 00:11:58.745 "name": "4e9e3917-fc44-4b95-8bf6-1fad04d7347e", 00:11:58.745 "aliases": [ 00:11:58.745 "lvs/lvol" 00:11:58.745 ], 00:11:58.745 "product_name": "Logical Volume", 00:11:58.745 "block_size": 4096, 00:11:58.745 "num_blocks": 38912, 00:11:58.745 "uuid": "4e9e3917-fc44-4b95-8bf6-1fad04d7347e", 00:11:58.745 "assigned_rate_limits": { 00:11:58.745 "rw_ios_per_sec": 0, 00:11:58.745 "rw_mbytes_per_sec": 0, 00:11:58.745 "r_mbytes_per_sec": 0, 00:11:58.745 "w_mbytes_per_sec": 0 00:11:58.745 }, 00:11:58.745 "claimed": false, 00:11:58.745 "zoned": false, 00:11:58.745 "supported_io_types": { 00:11:58.745 "read": true, 00:11:58.745 "write": true, 00:11:58.745 "unmap": true, 00:11:58.745 "flush": false, 00:11:58.745 "reset": true, 00:11:58.745 "nvme_admin": false, 00:11:58.745 "nvme_io": false, 00:11:58.745 "nvme_io_md": false, 00:11:58.745 "write_zeroes": true, 00:11:58.745 "zcopy": false, 00:11:58.745 "get_zone_info": false, 00:11:58.745 "zone_management": false, 00:11:58.745 "zone_append": false, 00:11:58.745 "compare": false, 00:11:58.745 "compare_and_write": false, 00:11:58.745 "abort": false, 00:11:58.745 "seek_hole": true, 00:11:58.745 "seek_data": true, 00:11:58.745 "copy": false, 00:11:58.745 "nvme_iov_md": false 00:11:58.745 }, 00:11:58.745 "driver_specific": { 00:11:58.745 "lvol": { 00:11:58.745 "lvol_store_uuid": "08c77a67-192f-4aec-874e-f0832a4b4afd", 00:11:58.745 "base_bdev": "aio_bdev", 00:11:58.745 "thin_provision": false, 00:11:58.745 "num_allocated_clusters": 38, 00:11:58.745 "snapshot": false, 00:11:58.745 "clone": false, 00:11:58.745 "esnap_clone": false 00:11:58.745 } 00:11:58.745 } 00:11:58.745 } 00:11:58.745 ] 00:11:58.745 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:58.745 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:11:58.745 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:59.003 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:59.003 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:11:59.003 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:59.267 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:59.267 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:59.526 [2024-07-13 03:01:05.794747] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:11:59.526 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:11:59.526 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:11:59.526 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:11:59.526 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:59.526 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.526 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:59.526 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.526 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:59.526 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.526 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:59.526 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:59.526 03:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:11:59.785 request: 00:11:59.785 { 00:11:59.785 "uuid": "08c77a67-192f-4aec-874e-f0832a4b4afd", 00:11:59.785 "method": "bdev_lvol_get_lvstores", 00:11:59.785 "req_id": 1 00:11:59.785 } 00:11:59.785 Got JSON-RPC error response 00:11:59.785 response: 00:11:59.785 { 00:11:59.785 "code": -19, 00:11:59.785 "message": "No such device" 00:11:59.785 } 00:11:59.785 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:11:59.785 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:59.785 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:59.785 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:59.785 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:59.785 aio_bdev 00:11:59.785 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4e9e3917-fc44-4b95-8bf6-1fad04d7347e 00:11:59.785 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=4e9e3917-fc44-4b95-8bf6-1fad04d7347e 00:11:59.785 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:59.785 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:59.785 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:59.785 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:59.785 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:00.043 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e9e3917-fc44-4b95-8bf6-1fad04d7347e -t 2000 00:12:00.300 [ 00:12:00.300 { 00:12:00.300 "name": "4e9e3917-fc44-4b95-8bf6-1fad04d7347e", 00:12:00.300 "aliases": [ 00:12:00.300 "lvs/lvol" 00:12:00.300 ], 00:12:00.300 "product_name": "Logical Volume", 00:12:00.300 "block_size": 4096, 00:12:00.300 "num_blocks": 38912, 00:12:00.300 "uuid": "4e9e3917-fc44-4b95-8bf6-1fad04d7347e", 00:12:00.300 "assigned_rate_limits": { 00:12:00.300 "rw_ios_per_sec": 0, 00:12:00.300 "rw_mbytes_per_sec": 0, 00:12:00.300 "r_mbytes_per_sec": 0, 00:12:00.300 "w_mbytes_per_sec": 0 00:12:00.300 }, 00:12:00.300 "claimed": false, 00:12:00.300 "zoned": false, 00:12:00.300 "supported_io_types": { 00:12:00.300 "read": true, 00:12:00.300 "write": true, 00:12:00.300 "unmap": true, 00:12:00.300 "flush": false, 00:12:00.300 "reset": true, 00:12:00.300 "nvme_admin": false, 00:12:00.300 "nvme_io": false, 00:12:00.300 "nvme_io_md": false, 00:12:00.300 "write_zeroes": true, 00:12:00.300 "zcopy": false, 00:12:00.300 "get_zone_info": false, 00:12:00.300 "zone_management": false, 00:12:00.300 "zone_append": false, 00:12:00.300 "compare": false, 00:12:00.300 "compare_and_write": false, 00:12:00.300 "abort": false, 00:12:00.300 "seek_hole": true, 00:12:00.300 "seek_data": true, 00:12:00.300 "copy": false, 00:12:00.300 "nvme_iov_md": false 00:12:00.300 }, 00:12:00.300 "driver_specific": { 00:12:00.300 "lvol": { 00:12:00.300 "lvol_store_uuid": "08c77a67-192f-4aec-874e-f0832a4b4afd", 00:12:00.300 "base_bdev": "aio_bdev", 00:12:00.300 "thin_provision": false, 00:12:00.300 "num_allocated_clusters": 38, 00:12:00.300 "snapshot": false, 00:12:00.300 "clone": false, 00:12:00.300 "esnap_clone": false 00:12:00.300 } 00:12:00.300 } 00:12:00.300 } 00:12:00.300 ] 00:12:00.300 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:00.300 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:12:00.300 03:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:00.557 03:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:00.557 03:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:12:00.557 03:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:00.814 03:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:00.814 03:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4e9e3917-fc44-4b95-8bf6-1fad04d7347e 00:12:01.071 03:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 08c77a67-192f-4aec-874e-f0832a4b4afd 00:12:01.329 03:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:01.587 03:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:02.154 00:12:02.154 real 0m20.834s 00:12:02.154 user 0m44.683s 00:12:02.154 sys 0m8.535s 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:02.154 ************************************ 00:12:02.154 END TEST lvs_grow_dirty 00:12:02.154 ************************************ 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:02.154 nvmf_trace.0 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:02.154 03:01:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:02.412 rmmod nvme_tcp 00:12:02.412 rmmod nvme_fabrics 00:12:02.412 rmmod nvme_keyring 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 68828 ']' 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 68828 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 68828 ']' 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 68828 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68828 00:12:02.412 killing process with pid 68828 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68828' 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 68828 00:12:02.412 03:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 68828 00:12:03.364 03:01:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:03.364 03:01:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:03.364 03:01:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:03.364 03:01:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:03.364 03:01:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:03.364 03:01:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.364 03:01:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.364 03:01:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.364 03:01:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:03.364 00:12:03.364 real 0m43.329s 00:12:03.364 user 1m9.835s 00:12:03.364 sys 0m11.813s 00:12:03.364 03:01:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.364 03:01:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:03.364 ************************************ 00:12:03.364 END TEST nvmf_lvs_grow 00:12:03.364 ************************************ 00:12:03.631 03:01:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:03.631 03:01:09 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:03.631 03:01:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:03.631 03:01:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.631 03:01:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:03.631 ************************************ 00:12:03.631 START TEST nvmf_bdev_io_wait 00:12:03.631 ************************************ 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:03.631 * Looking for test storage... 
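Between suites the harness tears everything down through nvmftestfini, which is what the rmmod/killprocess chatter above amounts to. Roughly, with $output_dir and $nvmfpid as stand-ins and the namespace removal being what _remove_spdk_ns presumably reduces to:

tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # keep the trace file
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"
ip netns delete nvmf_tgt_ns_spdk          # assumption: how the target namespace goes away
ip -4 addr flush nvmf_init_if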
00:12:03.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.631 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:03.632 03:01:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:03.632 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:03.632 Cannot find device "nvmf_tgt_br" 00:12:03.632 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:12:03.632 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:03.632 Cannot find device "nvmf_tgt_br2" 00:12:03.632 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:12:03.632 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:03.632 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:03.632 Cannot find device "nvmf_tgt_br" 00:12:03.632 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:12:03.632 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:03.632 Cannot find device "nvmf_tgt_br2" 00:12:03.632 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:12:03.632 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:03.632 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
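The "Cannot find device" and "Cannot open network namespace" messages around here are only the removal of a topology that does not exist yet; nvmf_veth_init then builds it from scratch. In outline, with addresses as in the log (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is set up the same way):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target side
ip link add nvmf_br type bridge
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

A ping from the host to 10.0.0.2 and one from inside the namespace back to 10.0.0.1, as below, confirm the bridge before the target is started.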
00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:03.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:03.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:03.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:12:03.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:12:03.891 00:12:03.891 --- 10.0.0.2 ping statistics --- 00:12:03.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.891 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:03.891 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:03.891 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:12:03.891 00:12:03.891 --- 10.0.0.3 ping statistics --- 00:12:03.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.891 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:03.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:03.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:03.891 00:12:03.891 --- 10.0.0.1 ping statistics --- 00:12:03.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.891 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=69140 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 69140 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 69140 ']' 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
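bdev_io_wait needs bdev options applied before the framework comes up, hence the --wait-for-rpc start above. The rpc_cmd calls that follow (rpc_cmd being the harness wrapper around scripts/rpc.py on /var/tmp/spdk.sock) amount to:

ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
$SPDK/scripts/rpc.py bdev_set_options -p 5 -c 1     # tiny bdev_io pool so submissions queue on io_wait
$SPDK/scripts/rpc.py framework_start_init
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192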
00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.891 03:01:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:04.149 [2024-07-13 03:01:10.484698] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:04.149 [2024-07-13 03:01:10.484859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.407 [2024-07-13 03:01:10.663340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.407 [2024-07-13 03:01:10.889047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.407 [2024-07-13 03:01:10.889106] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.407 [2024-07-13 03:01:10.889123] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.407 [2024-07-13 03:01:10.889137] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.407 [2024-07-13 03:01:10.889150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.407 [2024-07-13 03:01:10.889375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.407 [2024-07-13 03:01:10.889551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.407 [2024-07-13 03:01:10.889948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.407 [2024-07-13 03:01:10.889963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.974 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.974 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:04.974 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:04.974 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:04.974 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:05.232 [2024-07-13 03:01:11.687527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.232 
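The namespace under test is a 64 MiB malloc bdev; the workload side then launches one bdevperf per I/O type (write, read and flush are visible below) in parallel against it, each fed the target description through --json. Condensed, with the same flags as the log and gen_nvmf_target_json being the harness helper behind the /dev/fd/63 seen here:

$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# write job shown; the read and flush jobs follow the same pattern with -w read / -w flush
$SPDK/build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 &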
03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:05.232 [2024-07-13 03:01:11.709379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.232 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:05.492 Malloc0 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:05.492 [2024-07-13 03:01:11.825057] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=69181 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=69183 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=69185 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:05.492 { 00:12:05.492 "params": { 00:12:05.492 "name": "Nvme$subsystem", 00:12:05.492 "trtype": "$TEST_TRANSPORT", 00:12:05.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:05.492 "adrfam": "ipv4", 00:12:05.492 "trsvcid": 
"$NVMF_PORT", 00:12:05.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:05.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:05.492 "hdgst": ${hdgst:-false}, 00:12:05.492 "ddgst": ${ddgst:-false} 00:12:05.492 }, 00:12:05.492 "method": "bdev_nvme_attach_controller" 00:12:05.492 } 00:12:05.492 EOF 00:12:05.492 )") 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:05.492 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:05.492 { 00:12:05.492 "params": { 00:12:05.492 "name": "Nvme$subsystem", 00:12:05.492 "trtype": "$TEST_TRANSPORT", 00:12:05.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:05.492 "adrfam": "ipv4", 00:12:05.492 "trsvcid": "$NVMF_PORT", 00:12:05.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:05.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:05.493 "hdgst": ${hdgst:-false}, 00:12:05.493 "ddgst": ${ddgst:-false} 00:12:05.493 }, 00:12:05.493 "method": "bdev_nvme_attach_controller" 00:12:05.493 } 00:12:05.493 EOF 00:12:05.493 )") 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:05.493 { 00:12:05.493 "params": { 00:12:05.493 "name": "Nvme$subsystem", 00:12:05.493 "trtype": "$TEST_TRANSPORT", 00:12:05.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:05.493 "adrfam": "ipv4", 00:12:05.493 "trsvcid": "$NVMF_PORT", 00:12:05.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:05.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:05.493 "hdgst": ${hdgst:-false}, 00:12:05.493 "ddgst": ${ddgst:-false} 00:12:05.493 }, 00:12:05.493 "method": "bdev_nvme_attach_controller" 00:12:05.493 } 00:12:05.493 EOF 00:12:05.493 )") 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:05.493 { 00:12:05.493 "params": { 00:12:05.493 "name": "Nvme$subsystem", 00:12:05.493 "trtype": "$TEST_TRANSPORT", 00:12:05.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:05.493 "adrfam": "ipv4", 00:12:05.493 "trsvcid": "$NVMF_PORT", 00:12:05.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:05.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:05.493 "hdgst": ${hdgst:-false}, 00:12:05.493 "ddgst": ${ddgst:-false} 00:12:05.493 }, 00:12:05.493 "method": "bdev_nvme_attach_controller" 00:12:05.493 } 00:12:05.493 EOF 00:12:05.493 )") 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=69186 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:05.493 "params": { 00:12:05.493 "name": "Nvme1", 00:12:05.493 "trtype": "tcp", 00:12:05.493 "traddr": "10.0.0.2", 00:12:05.493 "adrfam": "ipv4", 00:12:05.493 "trsvcid": "4420", 00:12:05.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:05.493 "hdgst": false, 00:12:05.493 "ddgst": false 00:12:05.493 }, 00:12:05.493 "method": "bdev_nvme_attach_controller" 00:12:05.493 }' 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:05.493 "params": { 00:12:05.493 "name": "Nvme1", 00:12:05.493 "trtype": "tcp", 00:12:05.493 "traddr": "10.0.0.2", 00:12:05.493 "adrfam": "ipv4", 00:12:05.493 "trsvcid": "4420", 00:12:05.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:05.493 "hdgst": false, 00:12:05.493 "ddgst": false 00:12:05.493 }, 00:12:05.493 "method": "bdev_nvme_attach_controller" 00:12:05.493 }' 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:05.493 "params": { 00:12:05.493 "name": "Nvme1", 00:12:05.493 "trtype": "tcp", 00:12:05.493 "traddr": "10.0.0.2", 00:12:05.493 "adrfam": "ipv4", 00:12:05.493 "trsvcid": "4420", 00:12:05.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:05.493 "hdgst": false, 00:12:05.493 "ddgst": false 00:12:05.493 }, 00:12:05.493 "method": "bdev_nvme_attach_controller" 00:12:05.493 }' 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
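
Each bdevperf instance above receives its controller configuration through --json /dev/fd/63, i.e. a process substitution fed by gen_nvmf_target_json; the printf output shows the bdev_nvme_attach_controller fragment that ends up inside it. A minimal sketch of equivalent wiring is shown below, writing the config to a file instead; the /tmp/nvme1.json path is hypothetical, and the outer "subsystems" envelope is the standard SPDK JSON-config layout, assumed here to match what the helper emits.

  # Hypothetical config file holding the same attach-controller params as the printf output above.
  cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
  # Same invocation as the WRITE_PID instance in the trace, pointed at the file.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json \
      -q 128 -o 4096 -w write -t 1 -s 256
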
00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:05.493 "params": { 00:12:05.493 "name": "Nvme1", 00:12:05.493 "trtype": "tcp", 00:12:05.493 "traddr": "10.0.0.2", 00:12:05.493 "adrfam": "ipv4", 00:12:05.493 "trsvcid": "4420", 00:12:05.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:05.493 "hdgst": false, 00:12:05.493 "ddgst": false 00:12:05.493 }, 00:12:05.493 "method": "bdev_nvme_attach_controller" 00:12:05.493 }' 00:12:05.493 03:01:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 69181 00:12:05.493 [2024-07-13 03:01:11.937457] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:05.493 [2024-07-13 03:01:11.937847] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:05.493 [2024-07-13 03:01:11.945112] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:05.493 [2024-07-13 03:01:11.945383] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:05.493 [2024-07-13 03:01:11.958019] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:05.493 [2024-07-13 03:01:11.958213] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:05.493 [2024-07-13 03:01:11.958340] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:05.493 [2024-07-13 03:01:11.958746] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:05.751 [2024-07-13 03:01:12.148648] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.751 [2024-07-13 03:01:12.189957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.751 [2024-07-13 03:01:12.240010] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.009 [2024-07-13 03:01:12.296457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.009 [2024-07-13 03:01:12.349004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:06.009 [2024-07-13 03:01:12.448145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:06.009 [2024-07-13 03:01:12.459055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:06.267 [2024-07-13 03:01:12.508475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:06.267 [2024-07-13 03:01:12.532975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:06.267 [2024-07-13 03:01:12.636279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:06.267 [2024-07-13 03:01:12.651768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:06.267 [2024-07-13 03:01:12.691923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:06.267 Running I/O for 1 seconds... 00:12:06.525 Running I/O for 1 seconds... 00:12:06.525 Running I/O for 1 seconds... 00:12:06.525 Running I/O for 1 seconds... 
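
The four "Running I/O for 1 seconds..." lines above come from four concurrent bdevperf processes, one per workload, each pinned to its own core mask and shared-memory id (-i) so they can run side by side against the same subsystem; the script then waits on their PIDs before the per-workload tables below are printed. A condensed launch-and-wait sketch with the same flags, reusing the hypothetical /tmp/nvme1.json from the earlier sketch:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  workloads=(write read flush unmap)
  masks=(0x10 0x20 0x40 0x80)     # one reactor core per instance, as in the trace
  pids=()
  for i in 0 1 2 3; do
      "$BDEVPERF" -m "${masks[$i]}" -i "$((i + 1))" --json /tmp/nvme1.json \
          -q 128 -o 4096 -w "${workloads[$i]}" -t 1 -s 256 &
      pids+=($!)
  done
  wait "${pids[@]}"               # each instance prints its result table as it exits

The much higher IOPS on the flush job in the tables below is expected: the namespace is backed by a malloc bdev, so flushes complete almost immediately without touching any media.
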
00:12:07.467 00:12:07.467 Latency(us) 00:12:07.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.467 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:07.467 Nvme1n1 : 1.01 8968.82 35.03 0.00 0.00 14206.51 5183.30 23354.65 00:12:07.467 =================================================================================================================== 00:12:07.467 Total : 8968.82 35.03 0.00 0.00 14206.51 5183.30 23354.65 00:12:07.467 00:12:07.467 Latency(us) 00:12:07.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.467 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:07.467 Nvme1n1 : 1.01 7475.36 29.20 0.00 0.00 17029.13 9234.62 26214.40 00:12:07.467 =================================================================================================================== 00:12:07.467 Total : 7475.36 29.20 0.00 0.00 17029.13 9234.62 26214.40 00:12:07.467 00:12:07.467 Latency(us) 00:12:07.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.467 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:07.467 Nvme1n1 : 1.01 6456.64 25.22 0.00 0.00 19688.41 8757.99 28359.21 00:12:07.467 =================================================================================================================== 00:12:07.467 Total : 6456.64 25.22 0.00 0.00 19688.41 8757.99 28359.21 00:12:07.467 00:12:07.467 Latency(us) 00:12:07.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.467 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:07.467 Nvme1n1 : 1.00 142388.11 556.20 0.00 0.00 895.84 426.36 2800.17 00:12:07.467 =================================================================================================================== 00:12:07.467 Total : 142388.11 556.20 0.00 0.00 895.84 426.36 2800.17 00:12:08.400 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 69183 00:12:08.400 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 69185 00:12:08.400 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 69186 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:08.658 rmmod nvme_tcp 00:12:08.658 rmmod nvme_fabrics 00:12:08.658 rmmod nvme_keyring 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 69140 ']' 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 69140 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 69140 ']' 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 69140 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:08.658 03:01:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69140 00:12:08.659 killing process with pid 69140 00:12:08.659 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:08.659 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:08.659 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69140' 00:12:08.659 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 69140 00:12:08.659 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 69140 00:12:09.595 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:09.595 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:09.595 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:09.595 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.595 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.595 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.595 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.595 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.595 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:09.595 ************************************ 00:12:09.595 END TEST nvmf_bdev_io_wait 00:12:09.595 ************************************ 00:12:09.595 00:12:09.595 real 0m6.093s 00:12:09.595 user 0m28.191s 00:12:09.595 sys 0m2.603s 00:12:09.595 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.595 03:01:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.595 03:01:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:09.595 03:01:16 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:09.595 03:01:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:09.595 03:01:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.595 03:01:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:09.595 ************************************ 00:12:09.595 START TEST nvmf_queue_depth 00:12:09.595 ************************************ 00:12:09.595 03:01:16 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:09.595 * Looking for test storage... 00:12:09.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.854 03:01:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:09.855 Cannot find device "nvmf_tgt_br" 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:09.855 Cannot find device "nvmf_tgt_br2" 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:09.855 Cannot find device "nvmf_tgt_br" 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:09.855 Cannot find device "nvmf_tgt_br2" 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:09.855 03:01:16 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:09.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:09.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:09.855 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:12:10.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:12:10.117 00:12:10.117 --- 10.0.0.2 ping statistics --- 00:12:10.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.117 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:10.117 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:10.117 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:12:10.117 00:12:10.117 --- 10.0.0.3 ping statistics --- 00:12:10.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.117 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:10.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:10.117 00:12:10.117 --- 10.0.0.1 ping statistics --- 00:12:10.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.117 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:10.117 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:10.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=69438 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 69438 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 69438 ']' 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
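
With the queue_depth target up, the trace that follows provisions it over RPC: a TCP transport, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem cnode1 with that namespace, and a listener on 10.0.0.2:4420. rpc_cmd is the suite's wrapper around scripts/rpc.py, so the same sequence can be reproduced directly; a sketch using the socket path shown above:

  RPC=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock)
  "${RPC[@]}" nvmf_create_transport -t tcp -o -u 8192
  "${RPC[@]}" bdev_malloc_create 64 512 -b Malloc0      # 64 MB bdev, 512-byte blocks
  "${RPC[@]}" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "${RPC[@]}" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "${RPC[@]}" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
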
00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.118 03:01:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:10.118 [2024-07-13 03:01:16.580687] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:10.118 [2024-07-13 03:01:16.581036] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.377 [2024-07-13 03:01:16.759093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.635 [2024-07-13 03:01:16.985039] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.635 [2024-07-13 03:01:16.985366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.635 [2024-07-13 03:01:16.985572] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.635 [2024-07-13 03:01:16.985921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.635 [2024-07-13 03:01:16.986172] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.635 [2024-07-13 03:01:16.986242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.894 [2024-07-13 03:01:17.157307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:11.154 [2024-07-13 03:01:17.538208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:11.154 Malloc0 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.154 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:11.154 [2024-07-13 03:01:17.646808] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:11.412 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.412 03:01:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=69480 00:12:11.412 03:01:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:11.412 03:01:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:11.412 03:01:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 69480 /var/tmp/bdevperf.sock 00:12:11.412 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 69480 ']' 00:12:11.412 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:11.412 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.412 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:11.412 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.412 03:01:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:11.412 [2024-07-13 03:01:17.767490] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:11.412 [2024-07-13 03:01:17.768143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69480 ] 00:12:11.669 [2024-07-13 03:01:17.943820] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.669 [2024-07-13 03:01:18.160291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.927 [2024-07-13 03:01:18.323168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:12.186 03:01:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.186 03:01:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:12.186 03:01:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:12.186 03:01:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.186 03:01:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:12.445 NVMe0n1 00:12:12.445 03:01:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.445 03:01:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:12.445 Running I/O for 10 seconds... 00:12:24.651 00:12:24.651 Latency(us) 00:12:24.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.651 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:24.651 Verification LBA range: start 0x0 length 0x4000 00:12:24.651 NVMe0n1 : 10.14 6142.89 24.00 0.00 0.00 165684.50 25499.46 109147.23 00:12:24.651 =================================================================================================================== 00:12:24.651 Total : 6142.89 24.00 0.00 0.00 165684.50 25499.46 109147.23 00:12:24.651 0 00:12:24.651 03:01:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 69480 00:12:24.651 03:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 69480 ']' 00:12:24.651 03:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 69480 00:12:24.651 03:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:24.651 03:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.651 03:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69480 00:12:24.651 killing process with pid 69480 00:12:24.651 Received shutdown signal, test time was about 10.000000 seconds 00:12:24.651 00:12:24.651 Latency(us) 00:12:24.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.651 =================================================================================================================== 00:12:24.651 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:24.651 03:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:24.651 03:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:24.651 03:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 69480' 00:12:24.651 03:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 69480 00:12:24.651 03:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 69480 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:24.651 rmmod nvme_tcp 00:12:24.651 rmmod nvme_fabrics 00:12:24.651 rmmod nvme_keyring 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 69438 ']' 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 69438 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 69438 ']' 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 69438 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69438 00:12:24.651 killing process with pid 69438 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69438' 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 69438 00:12:24.651 03:01:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 69438 00:12:25.218 03:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:25.218 03:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:25.218 03:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:25.218 03:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.218 03:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:25.218 03:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.218 03:01:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.218 03:01:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.218 03:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:25.218 00:12:25.218 real 
0m15.508s 00:12:25.218 user 0m26.238s 00:12:25.218 sys 0m2.324s 00:12:25.218 03:01:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.218 03:01:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:25.218 ************************************ 00:12:25.218 END TEST nvmf_queue_depth 00:12:25.218 ************************************ 00:12:25.218 03:01:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:25.218 03:01:31 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:25.218 03:01:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:25.218 03:01:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.218 03:01:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:25.218 ************************************ 00:12:25.218 START TEST nvmf_target_multipath 00:12:25.218 ************************************ 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:25.218 * Looking for test storage... 00:12:25.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.218 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:25.219 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:25.219 03:01:31 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:25.476 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:25.476 Cannot find device "nvmf_tgt_br" 00:12:25.476 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:12:25.476 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.476 Cannot find device "nvmf_tgt_br2" 00:12:25.476 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:12:25.476 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:25.476 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:25.476 Cannot find device "nvmf_tgt_br" 00:12:25.476 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:12:25.476 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:25.477 Cannot find device "nvmf_tgt_br2" 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:25.477 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:25.735 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:25.735 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:25.735 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:25.735 03:01:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:25.735 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:25.735 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:25.735 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:25.735 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:25.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:12:25.735 00:12:25.735 --- 10.0.0.2 ping statistics --- 00:12:25.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.735 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:25.735 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:25.735 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:25.735 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:12:25.735 00:12:25.735 --- 10.0.0.3 ping statistics --- 00:12:25.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.735 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:25.735 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:25.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:25.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:25.735 00:12:25.735 --- 10.0.0.1 ping statistics --- 00:12:25.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.735 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:25.735 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:25.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=69815 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 69815 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 69815 ']' 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.736 03:01:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:25.736 [2024-07-13 03:01:32.196782] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:25.736 [2024-07-13 03:01:32.196955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.002 [2024-07-13 03:01:32.371500] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.267 [2024-07-13 03:01:32.612407] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.267 [2024-07-13 03:01:32.612480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.267 [2024-07-13 03:01:32.612501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.267 [2024-07-13 03:01:32.612518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.267 [2024-07-13 03:01:32.612536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.267 [2024-07-13 03:01:32.612799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.267 [2024-07-13 03:01:32.613103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.267 [2024-07-13 03:01:32.613844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.267 [2024-07-13 03:01:32.613855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.524 [2024-07-13 03:01:32.806260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:26.781 03:01:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.781 03:01:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:12:26.781 03:01:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.781 03:01:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:26.781 03:01:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:26.781 03:01:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.781 03:01:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:27.039 [2024-07-13 03:01:33.414058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.039 03:01:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:12:27.297 Malloc0 00:12:27.297 03:01:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:12:27.555 03:01:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:27.813 03:01:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.071 [2024-07-13 03:01:34.427931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.071 03:01:34 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:28.329 [2024-07-13 03:01:34.704243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:28.329 03:01:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:12:28.587 03:01:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:12:28.587 03:01:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.587 03:01:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:12:28.587 03:01:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.587 03:01:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:28.587 03:01:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:12:30.487 03:01:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:12:30.745 03:01:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:12:30.745 03:01:37 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=69904 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:12:30.745 03:01:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:30.745 [global] 00:12:30.745 thread=1 00:12:30.745 invalidate=1 00:12:30.745 rw=randrw 00:12:30.745 time_based=1 00:12:30.745 runtime=6 00:12:30.745 ioengine=libaio 00:12:30.745 direct=1 00:12:30.745 bs=4096 00:12:30.745 iodepth=128 00:12:30.745 norandommap=0 00:12:30.745 numjobs=1 00:12:30.745 00:12:30.745 verify_dump=1 00:12:30.745 verify_backlog=512 00:12:30.746 verify_state_save=0 00:12:30.746 do_verify=1 00:12:30.746 verify=crc32c-intel 00:12:30.746 [job0] 00:12:30.746 filename=/dev/nvme0n1 00:12:30.746 Could not set queue depth (nvme0n1) 00:12:30.746 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:30.746 fio-3.35 00:12:30.746 Starting 1 thread 00:12:31.678 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:12:31.937 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:32.196 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:12:32.196 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:32.196 
03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:32.196 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:32.196 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:32.196 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:32.196 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:12:32.196 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:32.196 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:32.196 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:32.196 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:32.196 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:32.196 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:12:32.454 03:01:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:32.712 03:01:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 69904 00:12:36.898 00:12:36.898 job0: (groupid=0, jobs=1): err= 0: pid=69931: Sat Jul 13 03:01:43 2024 00:12:36.898 read: IOPS=8308, BW=32.5MiB/s (34.0MB/s)(195MiB/6002msec) 00:12:36.898 slat (usec): min=8, max=7236, avg=71.18, stdev=269.49 00:12:36.898 clat (usec): min=1567, max=18173, avg=10456.38, stdev=1649.18 00:12:36.898 lat (usec): min=1601, max=18184, avg=10527.56, stdev=1651.01 00:12:36.898 clat percentiles (usec): 00:12:36.898 | 1.00th=[ 5473], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[ 9634], 00:12:36.898 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:12:36.898 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11731], 95.00th=[14222], 00:12:36.898 | 99.00th=[15926], 99.50th=[16319], 99.90th=[16909], 99.95th=[16909], 00:12:36.898 | 99.99th=[17433] 00:12:36.898 bw ( KiB/s): min= 400, max=21868, per=54.91%, avg=18248.82, stdev=6327.25, samples=11 00:12:36.898 iops : min= 100, max= 5467, avg=4562.18, stdev=1581.80, samples=11 00:12:36.898 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(102MiB/5142msec); 0 zone resets 00:12:36.899 slat (usec): min=19, max=3045, avg=82.20, stdev=196.96 00:12:36.899 clat (usec): min=1727, max=17277, avg=9190.71, stdev=1481.70 00:12:36.899 lat (usec): min=1782, max=17306, avg=9272.91, stdev=1487.03 00:12:36.899 clat percentiles (usec): 00:12:36.899 | 1.00th=[ 4113], 5.00th=[ 5800], 10.00th=[ 7898], 20.00th=[ 8586], 00:12:36.899 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:12:36.899 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:12:36.899 | 99.00th=[13829], 99.50th=[14484], 99.90th=[16319], 99.95th=[16581], 00:12:36.899 | 99.99th=[16909] 00:12:36.899 bw ( KiB/s): min= 288, max=22091, per=89.78%, avg=18241.36, stdev=6323.58, samples=11 00:12:36.899 iops : min= 72, max= 5522, avg=4560.27, stdev=1580.85, samples=11 00:12:36.899 lat (msec) : 2=0.01%, 4=0.36%, 10=46.89%, 20=52.75% 00:12:36.899 cpu : usr=5.52%, sys=21.10%, ctx=4443, majf=0, minf=145 00:12:36.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:36.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:36.899 issued rwts: total=49868,26117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:36.899 00:12:36.899 Run status group 0 (all jobs): 00:12:36.899 READ: bw=32.5MiB/s (34.0MB/s), 32.5MiB/s-32.5MiB/s (34.0MB/s-34.0MB/s), io=195MiB (204MB), run=6002-6002msec 00:12:36.899 WRITE: bw=19.8MiB/s (20.8MB/s), 19.8MiB/s-19.8MiB/s (20.8MB/s-20.8MB/s), io=102MiB (107MB), run=5142-5142msec 00:12:36.899 00:12:36.899 Disk stats (read/write): 00:12:36.899 nvme0n1: ios=48721/26117, merge=0/0, ticks=491031/226518, in_queue=717549, util=98.53% 00:12:36.899 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:12:37.157 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=70005 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:37.415 03:01:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:12:37.415 [global] 00:12:37.415 thread=1 00:12:37.415 invalidate=1 00:12:37.415 rw=randrw 00:12:37.415 time_based=1 00:12:37.415 runtime=6 00:12:37.415 ioengine=libaio 00:12:37.415 direct=1 00:12:37.415 bs=4096 00:12:37.415 iodepth=128 00:12:37.415 norandommap=0 00:12:37.415 numjobs=1 00:12:37.415 00:12:37.415 verify_dump=1 00:12:37.415 verify_backlog=512 00:12:37.415 verify_state_save=0 00:12:37.415 do_verify=1 00:12:37.415 verify=crc32c-intel 00:12:37.415 [job0] 00:12:37.415 filename=/dev/nvme0n1 00:12:37.415 Could not set queue depth (nvme0n1) 00:12:37.673 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:37.673 fio-3.35 00:12:37.673 Starting 1 thread 00:12:38.606 03:01:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:12:38.873 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:39.143 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:39.410 03:01:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 70005 00:12:44.674 00:12:44.674 job0: (groupid=0, jobs=1): err= 0: pid=70026: Sat Jul 13 03:01:50 2024 00:12:44.674 read: IOPS=9458, BW=36.9MiB/s (38.7MB/s)(222MiB/6008msec) 00:12:44.674 slat (usec): min=6, max=7507, avg=55.17, stdev=234.34 00:12:44.674 clat (usec): min=1125, max=18921, avg=9441.16, stdev=2340.62 00:12:44.674 lat (usec): min=1139, max=18932, avg=9496.32, stdev=2359.85 00:12:44.674 clat percentiles (usec): 00:12:44.674 | 1.00th=[ 3720], 5.00th=[ 5145], 10.00th=[ 6063], 20.00th=[ 7373], 00:12:44.674 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10290], 00:12:44.674 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11469], 95.00th=[12518], 00:12:44.674 | 99.00th=[15795], 99.50th=[16319], 99.90th=[16909], 99.95th=[17433], 00:12:44.674 | 99.99th=[18220] 00:12:44.674 bw ( KiB/s): min= 7744, max=38040, per=51.28%, avg=19401.33, stdev=7792.92, samples=12 00:12:44.674 iops : min= 1936, max= 9510, avg=4850.33, stdev=1948.23, samples=12 00:12:44.674 write: IOPS=5748, BW=22.5MiB/s (23.5MB/s)(114MiB/5093msec); 0 zone resets 00:12:44.674 slat (usec): min=13, max=2615, avg=64.13, stdev=167.87 00:12:44.674 clat (usec): min=1905, max=18101, avg=7722.33, stdev=2372.51 00:12:44.674 lat (usec): min=1961, max=18134, avg=7786.46, stdev=2395.04 00:12:44.674 clat percentiles (usec): 00:12:44.674 | 1.00th=[ 3163], 5.00th=[ 3949], 10.00th=[ 4490], 20.00th=[ 5145], 00:12:44.674 | 30.00th=[ 5866], 40.00th=[ 6915], 50.00th=[ 8586], 60.00th=[ 9110], 00:12:44.674 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10552], 00:12:44.674 | 99.00th=[13304], 99.50th=[14222], 99.90th=[15664], 99.95th=[16450], 00:12:44.674 | 99.99th=[17957] 00:12:44.674 bw ( KiB/s): min= 8192, max=37528, per=84.73%, avg=19484.67, stdev=7613.28, samples=12 00:12:44.674 iops : min= 2048, max= 9382, avg=4871.17, stdev=1903.32, samples=12 00:12:44.674 lat (msec) : 2=0.05%, 4=2.75%, 10=58.76%, 20=38.44% 00:12:44.674 cpu : usr=5.19%, sys=22.12%, ctx=4907, majf=0, minf=108 00:12:44.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:44.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:44.674 issued rwts: total=56826,29278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:44.674 00:12:44.674 Run status group 0 (all jobs): 00:12:44.674 READ: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=222MiB (233MB), run=6008-6008msec 00:12:44.674 WRITE: bw=22.5MiB/s (23.5MB/s), 22.5MiB/s-22.5MiB/s (23.5MB/s-23.5MB/s), io=114MiB (120MB), run=5093-5093msec 00:12:44.674 00:12:44.674 Disk stats (read/write): 00:12:44.674 nvme0n1: ios=56172/28672, merge=0/0, ticks=509150/207116, in_queue=716266, util=98.68% 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:12:44.674 
03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:44.674 rmmod nvme_tcp 00:12:44.674 rmmod nvme_fabrics 00:12:44.674 rmmod nvme_keyring 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:44.674 03:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 69815 ']' 00:12:44.675 03:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 69815 00:12:44.675 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 69815 ']' 00:12:44.675 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 69815 00:12:44.675 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:12:44.675 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:44.675 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69815 00:12:44.675 killing process with pid 69815 00:12:44.675 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:44.675 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:44.675 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69815' 00:12:44.675 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 69815 00:12:44.675 03:01:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 69815 00:12:45.241 03:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:12:45.241 03:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:45.241 03:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:45.241 03:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:45.241 03:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:45.241 03:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.241 03:01:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.241 03:01:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.241 03:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:45.241 ************************************ 00:12:45.241 END TEST nvmf_target_multipath 00:12:45.241 ************************************ 00:12:45.241 00:12:45.241 real 0m20.129s 00:12:45.241 user 1m13.376s 00:12:45.241 sys 0m9.735s 00:12:45.241 03:01:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:45.241 03:01:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:45.500 03:01:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:45.500 03:01:51 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:45.500 03:01:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:45.500 03:01:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.500 03:01:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:45.500 ************************************ 00:12:45.500 START TEST nvmf_zcopy 00:12:45.500 ************************************ 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:45.500 * Looking for test storage... 
00:12:45.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:45.500 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:45.501 Cannot find device "nvmf_tgt_br" 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:45.501 Cannot find device "nvmf_tgt_br2" 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:45.501 Cannot find device "nvmf_tgt_br" 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:45.501 Cannot find device "nvmf_tgt_br2" 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:45.501 03:01:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:45.760 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:45.760 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:45.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:12:45.760 00:12:45.760 --- 10.0.0.2 ping statistics --- 00:12:45.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.760 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:45.760 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:45.760 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:12:45.760 00:12:45.760 --- 10.0.0.3 ping statistics --- 00:12:45.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.760 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:45.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:45.760 00:12:45.760 --- 10.0.0.1 ping statistics --- 00:12:45.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.760 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=70279 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 70279 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 70279 ']' 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:45.760 03:01:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.018 03:01:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.018 03:01:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:46.018 [2024-07-13 03:01:52.348859] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:46.018 [2024-07-13 03:01:52.349065] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.275 [2024-07-13 03:01:52.515746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.275 [2024-07-13 03:01:52.745779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.275 [2024-07-13 03:01:52.745858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
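For readers following the trace, the nvmf/common.sh network plumbing above boils down to roughly the sequence below. This is a condensed sketch of the setup half only; the namespace, interface names, addresses and port 4420 are the ones printed in the trace, and the loop/compound lines are just shorthand for the individual commands shown there.

# target-side veth ends live in a private namespace, host side is bridged
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The bridge is what lets the initiator in the root namespace reach the target addresses 10.0.0.2/10.0.0.3 inside nvmf_tgt_ns_spdk over plain L2, which the three pings above confirm before the target is started.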
00:12:46.275 [2024-07-13 03:01:52.745881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.275 [2024-07-13 03:01:52.745923] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.275 [2024-07-13 03:01:52.745939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.275 [2024-07-13 03:01:52.745988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.533 [2024-07-13 03:01:52.942127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.098 [2024-07-13 03:01:53.344207] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.098 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.099 [2024-07-13 03:01:53.360369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:12:47.099 malloc0 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:47.099 { 00:12:47.099 "params": { 00:12:47.099 "name": "Nvme$subsystem", 00:12:47.099 "trtype": "$TEST_TRANSPORT", 00:12:47.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:47.099 "adrfam": "ipv4", 00:12:47.099 "trsvcid": "$NVMF_PORT", 00:12:47.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:47.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:47.099 "hdgst": ${hdgst:-false}, 00:12:47.099 "ddgst": ${ddgst:-false} 00:12:47.099 }, 00:12:47.099 "method": "bdev_nvme_attach_controller" 00:12:47.099 } 00:12:47.099 EOF 00:12:47.099 )") 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:47.099 03:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:47.099 "params": { 00:12:47.099 "name": "Nvme1", 00:12:47.099 "trtype": "tcp", 00:12:47.099 "traddr": "10.0.0.2", 00:12:47.099 "adrfam": "ipv4", 00:12:47.099 "trsvcid": "4420", 00:12:47.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:47.099 "hdgst": false, 00:12:47.099 "ddgst": false 00:12:47.099 }, 00:12:47.099 "method": "bdev_nvme_attach_controller" 00:12:47.099 }' 00:12:47.099 [2024-07-13 03:01:53.525267] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:47.099 [2024-07-13 03:01:53.525560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70318 ] 00:12:47.358 [2024-07-13 03:01:53.700457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.616 [2024-07-13 03:01:53.924302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.616 [2024-07-13 03:01:54.077457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:47.875 Running I/O for 10 seconds... 
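The gen_nvmf_target_json block above is what this first bdevperf instance reads on /dev/fd/62. Run by hand it is roughly equivalent to the sketch below: the inner bdev_nvme_attach_controller params appear verbatim in the trace, while the outer "subsystems"/"bdev" wrapper is an assumption about how the helper normally packages them, and /tmp/bdevperf_nvme.json is just a hypothetical stand-in for the process substitution.

cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# same flags as in the trace: 10 s verify workload, queue depth 128, 8 KiB I/O
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192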
00:12:57.847 00:12:57.847 Latency(us) 00:12:57.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.847 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:57.847 Verification LBA range: start 0x0 length 0x1000 00:12:57.847 Nvme1n1 : 10.02 5474.85 42.77 0.00 0.00 23315.93 2844.86 30980.65 00:12:57.847 =================================================================================================================== 00:12:57.847 Total : 5474.85 42.77 0.00 0.00 23315.93 2844.86 30980.65 00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=70446 00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:58.785 { 00:12:58.785 "params": { 00:12:58.785 "name": "Nvme$subsystem", 00:12:58.785 "trtype": "$TEST_TRANSPORT", 00:12:58.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:58.785 "adrfam": "ipv4", 00:12:58.785 "trsvcid": "$NVMF_PORT", 00:12:58.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:58.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:58.785 "hdgst": ${hdgst:-false}, 00:12:58.785 "ddgst": ${ddgst:-false} 00:12:58.785 }, 00:12:58.785 "method": "bdev_nvme_attach_controller" 00:12:58.785 } 00:12:58.785 EOF 00:12:58.785 )") 00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:58.785 [2024-07-13 03:02:05.137628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.785 [2024-07-13 03:02:05.137719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
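The pair of errors that first appears just above, subsystem.c "Requested NSID 1 already in use" followed by nvmf_rpc.c "Unable to add namespace", repeats for the rest of this excerpt: while the second bdevperf run (perfpid=70446, 5 s of randrw) is in flight, nvmf_subsystem_add_ns RPCs for NSID 1 keep being rejected because that namespace is still attached to nqn.2016-06.io.spdk:cnode1, so the flood appears to be deliberate exercise by zcopy.sh rather than a test failure. A hypothetical standalone reproduction (the rpc.py path is assumed from the repo layout seen above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# first add succeeds and claims NSID 1
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# a second add with the same explicit NSID is rejected; the target logs
# "Requested NSID 1 already in use" and the RPC returns an error
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1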
00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:58.785 03:02:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:58.785 "params": { 00:12:58.785 "name": "Nvme1", 00:12:58.785 "trtype": "tcp", 00:12:58.785 "traddr": "10.0.0.2", 00:12:58.785 "adrfam": "ipv4", 00:12:58.785 "trsvcid": "4420", 00:12:58.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:58.785 "hdgst": false, 00:12:58.785 "ddgst": false 00:12:58.785 }, 00:12:58.785 "method": "bdev_nvme_attach_controller" 00:12:58.785 }' 00:12:58.785 [2024-07-13 03:02:05.149574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.785 [2024-07-13 03:02:05.149618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.785 [2024-07-13 03:02:05.161531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.785 [2024-07-13 03:02:05.161588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.785 [2024-07-13 03:02:05.173541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.785 [2024-07-13 03:02:05.173580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.785 [2024-07-13 03:02:05.185544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.785 [2024-07-13 03:02:05.185600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.785 [2024-07-13 03:02:05.197571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.785 [2024-07-13 03:02:05.197611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.785 [2024-07-13 03:02:05.209566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.785 [2024-07-13 03:02:05.209623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.785 [2024-07-13 03:02:05.221562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.785 [2024-07-13 03:02:05.221600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.785 [2024-07-13 03:02:05.233550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.785 [2024-07-13 03:02:05.233593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.785 [2024-07-13 03:02:05.237111] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:58.785 [2024-07-13 03:02:05.237271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70446 ] 00:12:58.785 [2024-07-13 03:02:05.245581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.785 [2024-07-13 03:02:05.245619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.785 [2024-07-13 03:02:05.257563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.785 [2024-07-13 03:02:05.257621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:58.785 [2024-07-13 03:02:05.269575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:58.785 [2024-07-13 03:02:05.269612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.281642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.281702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.293580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.293619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.305618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.305667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.317614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.317652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.329607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.329662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.341620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.341657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.353596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.353651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.365616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.365653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.377624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.377678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.389613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.389649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.401636] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.401692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.407321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.046 [2024-07-13 03:02:05.413660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.413703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.425632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.425689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.437659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.437697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.449651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.449711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.461686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.461728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.473689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.473773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.485694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.485776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.497755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.497839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.509753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.509805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.521685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.521767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.046 [2024-07-13 03:02:05.533745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.046 [2024-07-13 03:02:05.533811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.305 [2024-07-13 03:02:05.545775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.305 [2024-07-13 03:02:05.545830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.305 [2024-07-13 03:02:05.557768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.305 [2024-07-13 03:02:05.557819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.305 [2024-07-13 03:02:05.569712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:59.305 [2024-07-13 03:02:05.569792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.305 [2024-07-13 03:02:05.571560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.305 [2024-07-13 03:02:05.581697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.305 [2024-07-13 03:02:05.581760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.305 [2024-07-13 03:02:05.593797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.305 [2024-07-13 03:02:05.593864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.605799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.605835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.617790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.617845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.629805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.629852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.641813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.641863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.653822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.653874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.665779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.665833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.677751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.677802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.689794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.689848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.701779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.701814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.713784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.713821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.725802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.725838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.737824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:59.306 [2024-07-13 03:02:05.737878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.740984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:59.306 [2024-07-13 03:02:05.749909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.750001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.761854] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.761949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.773851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.773927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.785852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.785933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.306 [2024-07-13 03:02:05.797856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.306 [2024-07-13 03:02:05.797935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.809940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.810049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.821883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.821979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.833870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.833977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.846079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.846139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.858110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.858150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.870148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.870209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.882114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.882154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.894113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.894169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.906138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 
[2024-07-13 03:02:05.906179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 Running I/O for 5 seconds... 00:12:59.565 [2024-07-13 03:02:05.918295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.918334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.935948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.936010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.950664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.950705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.965201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.965260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.981223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.981280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:05.991861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:05.991963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:06.007571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:06.007611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:06.022350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:06.022410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:06.038699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:06.038771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.565 [2024-07-13 03:02:06.055633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.565 [2024-07-13 03:02:06.055693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.072468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.072506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.089718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.089771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.107580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.107617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.123030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.123071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 
[2024-07-13 03:02:06.139479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.139516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.156849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.156915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.172078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.172115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.187718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.187780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.204043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.204085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.220693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.220751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.237887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.237967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.253072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.253115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.269028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.269068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.279763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.279822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.296367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.296406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:59.824 [2024-07-13 03:02:06.310987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:59.824 [2024-07-13 03:02:06.311045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.325697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.325768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.341924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.342012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.358463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 
03:02:06.358503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.374666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.374726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.393361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.393442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.407742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.407810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.423685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.423726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.440838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.440910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.458279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.458320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.472444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.472502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.488073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.488115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.499033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.499091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.515078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.515118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.530643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.530703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.541155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.541196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.557327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.557392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.083 [2024-07-13 03:02:06.571757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.083 [2024-07-13 03:02:06.571814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.587722] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.587780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.603247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.603303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.620278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.620338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.635814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.635871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.648516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.648573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.665373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.665456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.681200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.681264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.692226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.692283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.708055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.708114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.723842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.723910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.734431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.734474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.750694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.750734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.766394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.341 [2024-07-13 03:02:06.766452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.341 [2024-07-13 03:02:06.782952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.342 [2024-07-13 03:02:06.782993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.342 [2024-07-13 03:02:06.800135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.342 [2024-07-13 03:02:06.800196] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.342 [2024-07-13 03:02:06.816533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.342 [2024-07-13 03:02:06.816573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.342 [2024-07-13 03:02:06.832371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.342 [2024-07-13 03:02:06.832437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:06.848214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:06.848271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:06.863668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:06.863727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:06.879742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:06.879784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:06.896947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:06.897024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:06.913092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:06.913149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:06.928238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:06.928319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:06.942878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:06.942974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:06.959031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:06.959091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:06.976636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:06.976678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:06.991568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:06.991628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:07.003689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:07.003730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:07.021166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:07.021225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:07.037754] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:07.037799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:07.054110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:07.054170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:07.064068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:07.064108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.600 [2024-07-13 03:02:07.079860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.600 [2024-07-13 03:02:07.079945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.859 [2024-07-13 03:02:07.096309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.859 [2024-07-13 03:02:07.096351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.859 [2024-07-13 03:02:07.113783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.859 [2024-07-13 03:02:07.113857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.859 [2024-07-13 03:02:07.130487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.859 [2024-07-13 03:02:07.130527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.859 [2024-07-13 03:02:07.146829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.859 [2024-07-13 03:02:07.146888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.859 [2024-07-13 03:02:07.163249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.859 [2024-07-13 03:02:07.163305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.859 [2024-07-13 03:02:07.180409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.859 [2024-07-13 03:02:07.180471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.859 [2024-07-13 03:02:07.196589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.860 [2024-07-13 03:02:07.196633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.860 [2024-07-13 03:02:07.212306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.860 [2024-07-13 03:02:07.212365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.860 [2024-07-13 03:02:07.227548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.860 [2024-07-13 03:02:07.227589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.860 [2024-07-13 03:02:07.242931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.860 [2024-07-13 03:02:07.242976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.860 [2024-07-13 03:02:07.258714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.860 [2024-07-13 03:02:07.258755] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.860 [2024-07-13 03:02:07.270323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.860 [2024-07-13 03:02:07.270382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.860 [2024-07-13 03:02:07.285531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.860 [2024-07-13 03:02:07.285590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.860 [2024-07-13 03:02:07.300863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.860 [2024-07-13 03:02:07.300963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.860 [2024-07-13 03:02:07.311837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.860 [2024-07-13 03:02:07.311877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.860 [2024-07-13 03:02:07.327994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.860 [2024-07-13 03:02:07.328039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.860 [2024-07-13 03:02:07.342667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:00.860 [2024-07-13 03:02:07.342707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.118 [2024-07-13 03:02:07.358041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.358081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.373661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.373719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.392527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.392578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.407255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.407311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.423771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.423816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.438792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.438831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.454562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.454602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.465586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.465628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.482057] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.482096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.496159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.496200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.512042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.512082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.528382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.528422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.546547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.546592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.562489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.562529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.578345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.578385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.119 [2024-07-13 03:02:07.597461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.119 [2024-07-13 03:02:07.597504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.376 [2024-07-13 03:02:07.612214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.376 [2024-07-13 03:02:07.612270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.376 [2024-07-13 03:02:07.627662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.376 [2024-07-13 03:02:07.627702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.376 [2024-07-13 03:02:07.643469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.376 [2024-07-13 03:02:07.643511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.376 [2024-07-13 03:02:07.654688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.376 [2024-07-13 03:02:07.654730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.671931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.672005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.687670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.687710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.703986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.704025] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.714228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.714284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.730132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.730179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.745182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.745223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.760694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.760735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.777926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.777974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.794539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.794579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.810987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.811028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.827835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.827881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.840298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.840341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.377 [2024-07-13 03:02:07.858920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.377 [2024-07-13 03:02:07.858977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:07.873082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:07.873131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:07.889666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:07.889749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:07.905007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:07.905051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:07.920849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:07.920919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:07.931863] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:07.931949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:07.947138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:07.947181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:07.962779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:07.962821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:07.974850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:07.974937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:07.991429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:07.991475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:08.006966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:08.007019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:08.018476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:08.018520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:08.036479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:08.036525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:08.052563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:08.052650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:08.068078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:08.068121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:08.083178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:08.083222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:08.098689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:08.098732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.635 [2024-07-13 03:02:08.115236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.635 [2024-07-13 03:02:08.115294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.132238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.132305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.144426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.144468] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.161100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.161142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.176098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.176139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.193175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.193217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.208023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.208067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.223617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.223659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.238830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.238912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.254735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.254778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.265670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.265713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.281650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.281694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.297734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.297777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.314041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.314085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.327248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.327323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.345504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.345553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.362320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.362377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.894 [2024-07-13 03:02:08.377101] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:01.894 [2024-07-13 03:02:08.377156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.393567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.393626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.411394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.411467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.426815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.426863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.443562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.443605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.459353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.459395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.470648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.470690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.487581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.487623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.503518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.503559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.519802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.519869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.537136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.537178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.554012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.554053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.570227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.570270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.587870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.587945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.603831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.603876] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.615735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.615780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.153 [2024-07-13 03:02:08.631841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.153 [2024-07-13 03:02:08.631926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.648442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.648514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.664502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.664545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.677037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.677081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.690344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.690388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.704166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.704210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.719550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.719596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.735554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.735596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.752194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.752238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.770117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.770159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.785831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.785873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.796878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.796964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.812282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.812350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.827733] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.827796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.839407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.839448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.855124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.855164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.870569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.870628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.883440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.883505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.413 [2024-07-13 03:02:08.902213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.413 [2024-07-13 03:02:08.902271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:08.917188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:08.917230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:08.933406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:08.933465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:08.951238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:08.951282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:08.965618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:08.965663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:08.983109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:08.983184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:08.997938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:08.998009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:09.010874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:09.010960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:09.028191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:09.028234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:09.042834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:09.042876] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:09.059368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:09.059410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:09.076003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:09.076045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:09.092563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:09.092604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:09.108062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:09.108103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:09.124737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:09.124779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:09.141873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:09.141943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.671 [2024-07-13 03:02:09.154390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.671 [2024-07-13 03:02:09.154435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.173044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.173087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.189223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.189278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.204464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.204507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.220817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.220860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.238103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.238146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.253436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.253484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.264530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.264574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.280343] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.280384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.296728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.296769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.312423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.312464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.327638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.327695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.343888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.343990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.360021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.360063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.370794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.370846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.386161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.386202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.400755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.400797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.929 [2024-07-13 03:02:09.415577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:02.929 [2024-07-13 03:02:09.415619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.431752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.431794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.448784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.448826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.464847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.464915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.480869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.480985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.497200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.497244] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.514069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.514112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.530068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.530110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.542489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.542530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.560477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.560519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.576230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.576273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.592525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.592566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.604423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.604465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.621042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.621083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.635885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.635987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.652242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.193 [2024-07-13 03:02:09.652283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.193 [2024-07-13 03:02:09.669937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.194 [2024-07-13 03:02:09.670008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.455 [2024-07-13 03:02:09.687283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.455 [2024-07-13 03:02:09.687326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.455 [2024-07-13 03:02:09.703402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.455 [2024-07-13 03:02:09.703442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.455 [2024-07-13 03:02:09.719586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.455 [2024-07-13 03:02:09.719629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.455 [2024-07-13 03:02:09.734479] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.455 [2024-07-13 03:02:09.734521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.455 [2024-07-13 03:02:09.750260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.455 [2024-07-13 03:02:09.750303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.455 [2024-07-13 03:02:09.765598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.455 [2024-07-13 03:02:09.765643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.455 [2024-07-13 03:02:09.781025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.455 [2024-07-13 03:02:09.781067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.455 [2024-07-13 03:02:09.791738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.455 [2024-07-13 03:02:09.791779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.455 [2024-07-13 03:02:09.807153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.456 [2024-07-13 03:02:09.807194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.456 [2024-07-13 03:02:09.823889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.456 [2024-07-13 03:02:09.823960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.456 [2024-07-13 03:02:09.841496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.456 [2024-07-13 03:02:09.841540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.456 [2024-07-13 03:02:09.856796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.456 [2024-07-13 03:02:09.856854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.456 [2024-07-13 03:02:09.873681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.456 [2024-07-13 03:02:09.873756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.456 [2024-07-13 03:02:09.890424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.456 [2024-07-13 03:02:09.890467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.456 [2024-07-13 03:02:09.903061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.456 [2024-07-13 03:02:09.903106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.456 [2024-07-13 03:02:09.921102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.456 [2024-07-13 03:02:09.921144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.456 [2024-07-13 03:02:09.936112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.456 [2024-07-13 03:02:09.936155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.715 [2024-07-13 03:02:09.953353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.715 [2024-07-13 03:02:09.953422] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.715 [2024-07-13 03:02:09.969523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.715 [2024-07-13 03:02:09.969568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.715 [2024-07-13 03:02:09.986587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:09.986629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.003207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.003253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.016541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.016618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.034625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.034670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.050187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.050229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.065605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.065652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.077685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.077759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.094448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.094490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.110082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.110125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.120969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.121011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.136827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.136870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.151646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.151703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.168282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.168325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.184984] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.185026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.716 [2024-07-13 03:02:10.202208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.716 [2024-07-13 03:02:10.202249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.975 [2024-07-13 03:02:10.216794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.975 [2024-07-13 03:02:10.216835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.975 [2024-07-13 03:02:10.232361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.975 [2024-07-13 03:02:10.232403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.975 [2024-07-13 03:02:10.248656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.975 [2024-07-13 03:02:10.248698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.975 [2024-07-13 03:02:10.265459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.975 [2024-07-13 03:02:10.265506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.975 [2024-07-13 03:02:10.282483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.975 [2024-07-13 03:02:10.282528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.975 [2024-07-13 03:02:10.299023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.975 [2024-07-13 03:02:10.299065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.975 [2024-07-13 03:02:10.310739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.975 [2024-07-13 03:02:10.310782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.976 [2024-07-13 03:02:10.327127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.976 [2024-07-13 03:02:10.327168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.976 [2024-07-13 03:02:10.342518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.976 [2024-07-13 03:02:10.342561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.976 [2024-07-13 03:02:10.355288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.976 [2024-07-13 03:02:10.355331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.976 [2024-07-13 03:02:10.373844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.976 [2024-07-13 03:02:10.373925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.976 [2024-07-13 03:02:10.388998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.976 [2024-07-13 03:02:10.389040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.976 [2024-07-13 03:02:10.404607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.976 [2024-07-13 03:02:10.404665] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.976 [2024-07-13 03:02:10.420085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.976 [2024-07-13 03:02:10.420125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.976 [2024-07-13 03:02:10.436027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.976 [2024-07-13 03:02:10.436069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.976 [2024-07-13 03:02:10.453403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:03.976 [2024-07-13 03:02:10.453447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.468847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.468936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.481339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.481406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.499514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.499556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.514200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.514257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.530404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.530445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.547715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.547758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.563011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.563052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.578812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.578853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.589956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.590012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.607317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.607373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.623389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.623430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.640173] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.640216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.655180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.655223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.670171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.670214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.685220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.685263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.700800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.700842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.235 [2024-07-13 03:02:10.717104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.235 [2024-07-13 03:02:10.717149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.729766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.729828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.747718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.747759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.765047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.765090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.779564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.779606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.795039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.795079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.806019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.806060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.821868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.821957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.836562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.836604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.851838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.851880] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.867658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.867701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.884715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.884757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.901684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.901753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.916742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.916784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.928499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.928547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 00:13:04.494 Latency(us) 00:13:04.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.494 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:04.494 Nvme1n1 : 5.01 9894.35 77.30 0.00 0.00 12919.40 5034.36 22639.71 00:13:04.494 =================================================================================================================== 00:13:04.494 Total : 9894.35 77.30 0.00 0.00 12919.40 5034.36 22639.71 00:13:04.494 [2024-07-13 03:02:10.940481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.940719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.952505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.952716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.964497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.964706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.494 [2024-07-13 03:02:10.976549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.494 [2024-07-13 03:02:10.976866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:10.988536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:10.988745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.000487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.000706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.012503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.012685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.024497] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.024678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.036566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.036932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.048522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.048704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.060504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.060684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.072517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.072694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.084518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.084699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.096512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.096707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.108522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.108701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.120530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.120708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.132520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.132699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.144531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.144711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.156613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.156938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.168539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.168720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.180547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.180727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.192560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.192761] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.204669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.205076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.216567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.216773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.228537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.228718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:04.753 [2024-07-13 03:02:11.240556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:04.753 [2024-07-13 03:02:11.240739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.252566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.252608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.264552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.264597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.276564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.276602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.288560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.288596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.300653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.300714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.312634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.312682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.324576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.324612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.336608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.336645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.348602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.348639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.360583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.360619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.372603] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.372640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.384594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.384630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.396658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.396696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.408687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.408727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.420611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.420664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.432642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.432678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.444636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.444673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.456619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.456655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.468640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.468676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.480671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.480709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.011 [2024-07-13 03:02:11.492650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.011 [2024-07-13 03:02:11.492688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.504758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.504804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.516656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.516697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.528761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.528819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.540672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.540711] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.552655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.552692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.564673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.564710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.576679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.576716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.588729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.588771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.600715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.600756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.612685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.612722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.624728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.624766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.636718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.636756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.648732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.648768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.660727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.660763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.672720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.672755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.692745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.692782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.704795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.704846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.716774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.716811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.728781] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.728818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.740769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.740805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.752763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.752817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.272 [2024-07-13 03:02:11.760794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.272 [2024-07-13 03:02:11.760836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.772801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.772859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.784791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.784829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.796857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.796965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.808790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.808828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.820826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.820963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.832864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.832964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.844802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.844840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.856824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.856862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.868809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.868861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.880832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.880869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.892846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.892910] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.904828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.904865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.916855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.916934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.928852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.928914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.940848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.940928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.952930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.952968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 [2024-07-13 03:02:11.964842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:05.530 [2024-07-13 03:02:11.964878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.530 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (70446) - No such process 00:13:05.530 03:02:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 70446 00:13:05.530 03:02:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.530 03:02:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.530 03:02:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:05.530 03:02:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.531 03:02:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:05.531 03:02:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.531 03:02:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:05.531 delay0 00:13:05.531 03:02:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.531 03:02:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:05.531 03:02:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.531 03:02:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:05.531 03:02:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.531 03:02:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:05.788 [2024-07-13 03:02:12.192866] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:12.343 Initializing NVMe Controllers 00:13:12.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:12.343 
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:12.343 Initialization complete. Launching workers. 00:13:12.343 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 92 00:13:12.343 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 379, failed to submit 33 00:13:12.343 success 278, unsuccess 101, failed 0 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.343 rmmod nvme_tcp 00:13:12.343 rmmod nvme_fabrics 00:13:12.343 rmmod nvme_keyring 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 70279 ']' 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 70279 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 70279 ']' 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 70279 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70279 00:13:12.343 killing process with pid 70279 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70279' 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 70279 00:13:12.343 03:02:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 70279 00:13:13.276 03:02:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.276 03:02:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.276 03:02:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.276 03:02:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.276 03:02:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.276 03:02:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.276 03:02:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.276 03:02:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.276 03:02:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:13.276 
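The long run of paired errors above, 'Requested NSID 1 already in use' followed by 'Unable to add namespace', comes from the zcopy test repeatedly calling nvmf_subsystem_add_ns for NSID 1 while that namespace is still attached, so every attempt is rejected and only the timestamps change. The tail of the test then detaches the namespace, wraps malloc0 in a delay bdev, re-attaches it, and points the abort example at it over NVMe/TCP. A minimal sketch of that sequence in plain shell, assuming rpc_cmd simply forwards its arguments to scripts/rpc.py and reusing the values recorded in this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Detach the existing namespace, wrap malloc0 in a delay bdev, re-attach it as NSID 1.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Drive abortable random I/O at the slowed-down namespace (flags exactly as recorded above).
    /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The large latency values handed to bdev_delay_create keep commands outstanding long enough for the aborts to land, which is what the submitted/success counters above report.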
************************************ 00:13:13.276 END TEST nvmf_zcopy 00:13:13.276 ************************************ 00:13:13.276 00:13:13.276 real 0m27.692s 00:13:13.276 user 0m45.837s 00:13:13.276 sys 0m6.935s 00:13:13.276 03:02:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:13.276 03:02:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:13.276 03:02:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:13.276 03:02:19 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:13.276 03:02:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:13.276 03:02:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.276 03:02:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:13.276 ************************************ 00:13:13.276 START TEST nvmf_nmic 00:13:13.276 ************************************ 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:13.276 * Looking for test storage... 00:13:13.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.276 03:02:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.277 03:02:19 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:13.277 Cannot find device "nvmf_tgt_br" 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:13.277 Cannot find device "nvmf_tgt_br2" 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:13.277 Cannot find device "nvmf_tgt_br" 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:13.277 Cannot find device "nvmf_tgt_br2" 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip 
link delete nvmf_br type bridge 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:13.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:13.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:13.277 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:13.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:13.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:13:13.535 00:13:13.535 --- 10.0.0.2 ping statistics --- 00:13:13.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.535 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:13.535 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:13.535 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:13:13.535 00:13:13.535 --- 10.0.0.3 ping statistics --- 00:13:13.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.535 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:13.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:13.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:13.535 00:13:13.535 --- 10.0.0.1 ping statistics --- 00:13:13.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.535 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=70790 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 70790 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 70790 ']' 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.535 03:02:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.536 03:02:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.536 03:02:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.536 03:02:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:13.793 [2024-07-13 03:02:20.065240] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
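The nvmf_veth_init block above builds the virtual test network that the three pings above verify: one veth pair leaves nvmf_init_if on the host as the initiator endpoint (10.0.0.1/24), two more pairs place nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) inside the nvmf_tgt_ns_spdk namespace for the target, and the host-side peers are enslaved to the nvmf_br bridge. Condensed from the trace, a sketch of the same topology (interface names and addresses as shown above; the link-up and iptables ACCEPT steps are elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target ends move into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # ties the host-side peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    ping -c 1 10.0.0.2                                           # host-to-target reachability check

With that in place, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk), so the target listens on 10.0.0.2 while the initiator-side tools stay on the host.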
00:13:13.794 [2024-07-13 03:02:20.065416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.794 [2024-07-13 03:02:20.229394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.052 [2024-07-13 03:02:20.465843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.052 [2024-07-13 03:02:20.466197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.052 [2024-07-13 03:02:20.466387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.052 [2024-07-13 03:02:20.466417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.052 [2024-07-13 03:02:20.466437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.052 [2024-07-13 03:02:20.466625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.052 [2024-07-13 03:02:20.466766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.052 [2024-07-13 03:02:20.467170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.052 [2024-07-13 03:02:20.467170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.310 [2024-07-13 03:02:20.656230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:14.567 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.567 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:14.567 03:02:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:14.567 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:14.567 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.567 03:02:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.567 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:14.567 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.567 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.567 [2024-07-13 03:02:21.057969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.825 Malloc0 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
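The RPC trace above has now created the TCP transport, a 64 MB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1; the steps that follow attach Malloc0 as its namespace, add a 10.0.0.2:4420 listener, and then deliberately try to attach the same bdev to a second subsystem to confirm the claim is rejected. A minimal sketch of that sequence, assuming rpc_cmd forwards its arguments to scripts/rpc.py, with flags copied from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0        # becomes NSID 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # test case1: a bdev claimed by one subsystem cannot back a namespace in another
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0        # rejected: Malloc0 already claimed

The JSON-RPC error further down ('bdev Malloc0 already claimed ... cannot be opened, error=-1') is the expected outcome, and nmic.sh turns it into the 'Adding namespace failed - expected result.' message before moving on to test case2.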
00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.825 [2024-07-13 03:02:21.182415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.825 test case1: single bdev can't be used in multiple subsystems 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.825 [2024-07-13 03:02:21.206185] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:14.825 [2024-07-13 03:02:21.206259] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:14.825 [2024-07-13 03:02:21.206279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.825 request: 00:13:14.825 { 00:13:14.825 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:14.825 "namespace": { 00:13:14.825 "bdev_name": "Malloc0", 00:13:14.825 "no_auto_visible": false 00:13:14.825 }, 00:13:14.825 "method": "nvmf_subsystem_add_ns", 00:13:14.825 "req_id": 1 00:13:14.825 } 00:13:14.825 Got JSON-RPC error response 00:13:14.825 response: 00:13:14.825 { 00:13:14.825 "code": -32602, 00:13:14.825 "message": "Invalid parameters" 00:13:14.825 } 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 
-eq 0 ']' 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:14.825 Adding namespace failed - expected result. 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:14.825 test case2: host connect to nvmf target in multiple paths 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:14.825 [2024-07-13 03:02:21.218400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.825 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.083 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:15.083 03:02:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.083 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:15.083 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.083 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:15.083 03:02:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:17.619 03:02:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:17.619 03:02:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:17.619 03:02:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.619 03:02:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:17.619 03:02:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.619 03:02:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:17.619 03:02:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:17.619 [global] 00:13:17.619 thread=1 00:13:17.619 invalidate=1 00:13:17.619 rw=write 00:13:17.619 time_based=1 00:13:17.619 runtime=1 00:13:17.619 ioengine=libaio 00:13:17.619 direct=1 00:13:17.619 bs=4096 00:13:17.619 iodepth=1 00:13:17.619 norandommap=0 00:13:17.619 numjobs=1 00:13:17.619 00:13:17.619 verify_dump=1 00:13:17.619 verify_backlog=512 00:13:17.619 verify_state_save=0 00:13:17.619 do_verify=1 00:13:17.619 verify=crc32c-intel 00:13:17.619 [job0] 00:13:17.619 filename=/dev/nvme0n1 00:13:17.619 Could not set queue depth (nvme0n1) 00:13:17.619 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:17.619 fio-3.35 00:13:17.619 Starting 1 thread 00:13:18.561 00:13:18.561 job0: (groupid=0, jobs=1): err= 0: pid=70876: Sat Jul 13 03:02:24 
2024 00:13:18.561 read: IOPS=2274, BW=9099KiB/s (9317kB/s)(9108KiB/1001msec) 00:13:18.561 slat (nsec): min=12733, max=61906, avg=16303.66, stdev=4977.00 00:13:18.561 clat (usec): min=176, max=3224, avg=222.45, stdev=69.69 00:13:18.561 lat (usec): min=192, max=3247, avg=238.75, stdev=70.35 00:13:18.561 clat percentiles (usec): 00:13:18.561 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:13:18.561 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:13:18.561 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 262], 00:13:18.561 | 99.00th=[ 293], 99.50th=[ 343], 99.90th=[ 652], 99.95th=[ 898], 00:13:18.561 | 99.99th=[ 3228] 00:13:18.561 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:18.561 slat (usec): min=18, max=109, avg=25.28, stdev= 7.72 00:13:18.561 clat (usec): min=110, max=7982, avg=149.43, stdev=236.06 00:13:18.561 lat (usec): min=132, max=8002, avg=174.71, stdev=236.31 00:13:18.561 clat percentiles (usec): 00:13:18.561 | 1.00th=[ 113], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 124], 00:13:18.561 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 141], 00:13:18.561 | 70.00th=[ 149], 80.00th=[ 157], 90.00th=[ 172], 95.00th=[ 182], 00:13:18.561 | 99.00th=[ 215], 99.50th=[ 233], 99.90th=[ 4883], 99.95th=[ 7111], 00:13:18.561 | 99.99th=[ 7963] 00:13:18.561 bw ( KiB/s): min=10456, max=10456, per=100.00%, avg=10456.00, stdev= 0.00, samples=1 00:13:18.561 iops : min= 2614, max= 2614, avg=2614.00, stdev= 0.00, samples=1 00:13:18.561 lat (usec) : 250=95.18%, 500=4.67%, 750=0.02%, 1000=0.02% 00:13:18.561 lat (msec) : 4=0.04%, 10=0.06% 00:13:18.561 cpu : usr=2.20%, sys=7.80%, ctx=4837, majf=0, minf=2 00:13:18.561 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:18.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.561 issued rwts: total=2277,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.561 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:18.561 00:13:18.561 Run status group 0 (all jobs): 00:13:18.561 READ: bw=9099KiB/s (9317kB/s), 9099KiB/s-9099KiB/s (9317kB/s-9317kB/s), io=9108KiB (9327kB), run=1001-1001msec 00:13:18.561 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:13:18.561 00:13:18.561 Disk stats (read/write): 00:13:18.561 nvme0n1: ios=2098/2280, merge=0/0, ticks=506/374, in_queue=880, util=90.58% 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT 
SIGTERM EXIT 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.561 rmmod nvme_tcp 00:13:18.561 rmmod nvme_fabrics 00:13:18.561 rmmod nvme_keyring 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 70790 ']' 00:13:18.561 03:02:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 70790 00:13:18.562 03:02:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 70790 ']' 00:13:18.562 03:02:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 70790 00:13:18.562 03:02:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:18.562 03:02:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.562 03:02:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70790 00:13:18.562 killing process with pid 70790 00:13:18.562 03:02:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:18.562 03:02:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:18.562 03:02:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70790' 00:13:18.562 03:02:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 70790 00:13:18.562 03:02:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 70790 00:13:20.092 03:02:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.092 03:02:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:20.092 03:02:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:20.092 03:02:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.092 03:02:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:20.092 03:02:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.092 03:02:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.092 03:02:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.092 03:02:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:20.092 00:13:20.092 real 0m6.728s 00:13:20.092 user 0m20.533s 00:13:20.092 sys 0m2.377s 00:13:20.092 ************************************ 00:13:20.092 END TEST nvmf_nmic 00:13:20.092 ************************************ 00:13:20.092 03:02:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:20.092 03:02:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:20.092 03:02:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:20.092 03:02:26 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test 
nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:20.092 03:02:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:20.092 03:02:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.092 03:02:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:20.092 ************************************ 00:13:20.092 START TEST nvmf_fio_target 00:13:20.092 ************************************ 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:20.092 * Looking for test storage... 00:13:20.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:20.092 Cannot find device "nvmf_tgt_br" 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:20.092 Cannot find device "nvmf_tgt_br2" 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:13:20.092 Cannot find device "nvmf_tgt_br" 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:20.092 Cannot find device "nvmf_tgt_br2" 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:20.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:20.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:20.092 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:20.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:13:20.350 00:13:20.350 --- 10.0.0.2 ping statistics --- 00:13:20.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.350 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:20.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:20.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:20.350 00:13:20.350 --- 10.0.0.3 ping statistics --- 00:13:20.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.350 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:20.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:13:20.350 00:13:20.350 --- 10.0.0.1 ping statistics --- 00:13:20.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.350 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:20.350 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
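The nvmf_veth_init trace above can be condensed into a standalone sketch of the test topology: one network namespace for the target, three veth pairs, a bridge joining the host-side peers, and iptables rules that admit NVMe/TCP traffic on port 4420. This is a simplified reconstruction from the commands shown in the trace, not the common.sh helper itself; it assumes root privileges and the iproute2/iptables tools.

  #!/usr/bin/env bash
  # Rebuild the test network shown in the trace (names/addresses taken from the log).
  set -e
  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  # Veth pairs: one initiator-side link plus two target-side links
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # The target ends live inside the namespace
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  # 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listen addresses
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # Bring all links up on both sides
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up
  # Bridge the host-side peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Admit NVMe/TCP on port 4420 and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Same connectivity checks as in the log
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec "$NS" ping -c 1 10.0.0.1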
00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=71065 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 71065 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 71065 ']' 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.351 03:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.609 [2024-07-13 03:02:26.869583] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:20.609 [2024-07-13 03:02:26.870026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.609 [2024-07-13 03:02:27.042130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.872 [2024-07-13 03:02:27.203630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.872 [2024-07-13 03:02:27.203962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.872 [2024-07-13 03:02:27.204102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.872 [2024-07-13 03:02:27.204220] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.872 [2024-07-13 03:02:27.204242] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
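At this point nvmfappstart has launched the target binary inside the namespace and is waiting for its JSON-RPC socket on /var/tmp/spdk.sock. A minimal sketch of that step, using the paths and flags from the trace; the polling loop only approximates the waitforlisten helper (which also enforces a retry limit, among other checks):

  # Start the SPDK NVMe-oF target inside the test namespace (flags as traced above).
  SPDK=/home/vagrant/spdk_repo/spdk
  RPC_SOCK=/var/tmp/spdk.sock
  ip netns exec nvmf_tgt_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app answers (approximation of waitforlisten)
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" -t 1 rpc_get_methods &>/dev/null && break
      sleep 0.1
  done

Once the socket answers, the rpc.py calls traced below (nvmf_create_transport, bdev_malloc_create, bdev_raid_create, nvmf_create_subsystem and friends) provision the target over this same socket.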
00:13:20.872 [2024-07-13 03:02:27.204458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.872 [2024-07-13 03:02:27.204554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.872 [2024-07-13 03:02:27.204707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.872 [2024-07-13 03:02:27.205089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.133 [2024-07-13 03:02:27.383454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:21.391 03:02:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.391 03:02:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:21.391 03:02:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.391 03:02:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:21.391 03:02:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.391 03:02:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.391 03:02:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:21.650 [2024-07-13 03:02:28.051344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.650 03:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.216 03:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:22.216 03:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.474 03:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:22.474 03:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.732 03:02:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:22.732 03:02:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.989 03:02:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:22.989 03:02:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:23.247 03:02:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.505 03:02:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:23.505 03:02:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.762 03:02:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:23.762 03:02:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:24.329 03:02:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:24.329 03:02:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:24.329 03:02:30 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.587 03:02:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:24.587 03:02:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:24.844 03:02:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:24.844 03:02:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:25.102 03:02:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.360 [2024-07-13 03:02:31.659298] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.360 03:02:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:25.618 03:02:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:25.875 03:02:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.875 03:02:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:25.875 03:02:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:25.875 03:02:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.875 03:02:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:25.875 03:02:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:25.875 03:02:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:28.409 03:02:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:28.409 03:02:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:28.409 03:02:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.409 03:02:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:28.409 03:02:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.409 03:02:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:28.409 03:02:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:28.409 [global] 00:13:28.409 thread=1 00:13:28.409 invalidate=1 00:13:28.409 rw=write 00:13:28.409 time_based=1 00:13:28.409 runtime=1 00:13:28.409 ioengine=libaio 00:13:28.409 direct=1 00:13:28.409 bs=4096 00:13:28.409 iodepth=1 00:13:28.409 norandommap=0 00:13:28.409 numjobs=1 00:13:28.409 00:13:28.409 verify_dump=1 00:13:28.409 verify_backlog=512 00:13:28.409 verify_state_save=0 00:13:28.409 do_verify=1 00:13:28.409 
verify=crc32c-intel 00:13:28.409 [job0] 00:13:28.409 filename=/dev/nvme0n1 00:13:28.409 [job1] 00:13:28.409 filename=/dev/nvme0n2 00:13:28.409 [job2] 00:13:28.409 filename=/dev/nvme0n3 00:13:28.409 [job3] 00:13:28.409 filename=/dev/nvme0n4 00:13:28.409 Could not set queue depth (nvme0n1) 00:13:28.409 Could not set queue depth (nvme0n2) 00:13:28.409 Could not set queue depth (nvme0n3) 00:13:28.409 Could not set queue depth (nvme0n4) 00:13:28.409 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:28.409 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:28.409 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:28.409 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:28.409 fio-3.35 00:13:28.409 Starting 4 threads 00:13:29.345 00:13:29.345 job0: (groupid=0, jobs=1): err= 0: pid=71254: Sat Jul 13 03:02:35 2024 00:13:29.345 read: IOPS=1505, BW=6022KiB/s (6167kB/s)(6028KiB/1001msec) 00:13:29.345 slat (usec): min=16, max=525, avg=21.11, stdev=13.59 00:13:29.345 clat (usec): min=4, max=2450, avg=332.20, stdev=77.22 00:13:29.345 lat (usec): min=234, max=2480, avg=353.31, stdev=77.58 00:13:29.345 clat percentiles (usec): 00:13:29.345 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 310], 00:13:29.345 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 330], 00:13:29.345 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 351], 95.00th=[ 363], 00:13:29.345 | 99.00th=[ 494], 99.50th=[ 562], 99.90th=[ 1860], 99.95th=[ 2442], 00:13:29.345 | 99.99th=[ 2442] 00:13:29.345 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:29.345 slat (nsec): min=25574, max=82157, avg=36134.35, stdev=7124.21 00:13:29.345 clat (usec): min=135, max=661, avg=263.05, stdev=46.12 00:13:29.345 lat (usec): min=164, max=703, avg=299.19, stdev=49.40 00:13:29.345 clat percentiles (usec): 00:13:29.345 | 1.00th=[ 153], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 241], 00:13:29.345 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:13:29.345 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 379], 00:13:29.345 | 99.00th=[ 457], 99.50th=[ 478], 99.90th=[ 619], 99.95th=[ 660], 00:13:29.345 | 99.99th=[ 660] 00:13:29.345 bw ( KiB/s): min= 8192, max= 8192, per=24.46%, avg=8192.00, stdev= 0.00, samples=1 00:13:29.345 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:29.345 lat (usec) : 10=0.03%, 250=18.80%, 500=80.61%, 750=0.46%, 1000=0.03% 00:13:29.345 lat (msec) : 2=0.03%, 4=0.03% 00:13:29.345 cpu : usr=2.40%, sys=6.10%, ctx=3043, majf=0, minf=12 00:13:29.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.345 issued rwts: total=1507,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.345 job1: (groupid=0, jobs=1): err= 0: pid=71255: Sat Jul 13 03:02:35 2024 00:13:29.345 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:29.345 slat (nsec): min=13567, max=46950, avg=16299.92, stdev=3124.13 00:13:29.345 clat (usec): min=163, max=532, avg=192.06, stdev=18.55 00:13:29.345 lat (usec): min=177, max=547, avg=208.36, stdev=19.08 00:13:29.345 clat 
percentiles (usec): 00:13:29.345 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:13:29.345 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:13:29.345 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 219], 00:13:29.345 | 99.00th=[ 239], 99.50th=[ 289], 99.90th=[ 412], 99.95th=[ 449], 00:13:29.345 | 99.99th=[ 537] 00:13:29.345 write: IOPS=2708, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec); 0 zone resets 00:13:29.345 slat (nsec): min=15657, max=97193, avg=23904.99, stdev=5547.28 00:13:29.345 clat (usec): min=110, max=334, avg=144.66, stdev=16.33 00:13:29.345 lat (usec): min=133, max=356, avg=168.57, stdev=17.59 00:13:29.345 clat percentiles (usec): 00:13:29.345 | 1.00th=[ 118], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 133], 00:13:29.345 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 147], 00:13:29.345 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 172], 00:13:29.345 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 269], 99.95th=[ 273], 00:13:29.345 | 99.99th=[ 334] 00:13:29.345 bw ( KiB/s): min=12288, max=12288, per=36.70%, avg=12288.00, stdev= 0.00, samples=1 00:13:29.345 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:29.345 lat (usec) : 250=99.54%, 500=0.44%, 750=0.02% 00:13:29.345 cpu : usr=2.20%, sys=8.40%, ctx=5273, majf=0, minf=9 00:13:29.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.345 issued rwts: total=2560,2711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.345 job2: (groupid=0, jobs=1): err= 0: pid=71257: Sat Jul 13 03:02:35 2024 00:13:29.345 read: IOPS=2370, BW=9483KiB/s (9710kB/s)(9492KiB/1001msec) 00:13:29.345 slat (nsec): min=12753, max=44362, avg=15192.11, stdev=2827.07 00:13:29.345 clat (usec): min=175, max=1631, avg=208.53, stdev=35.11 00:13:29.345 lat (usec): min=189, max=1646, avg=223.72, stdev=35.32 00:13:29.345 clat percentiles (usec): 00:13:29.345 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 194], 00:13:29.345 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 208], 00:13:29.345 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 237], 00:13:29.345 | 99.00th=[ 258], 99.50th=[ 322], 99.90th=[ 355], 99.95th=[ 627], 00:13:29.345 | 99.99th=[ 1631] 00:13:29.345 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:29.345 slat (nsec): min=15617, max=79210, avg=23190.49, stdev=5026.11 00:13:29.345 clat (usec): min=127, max=1970, avg=156.62, stdev=40.30 00:13:29.345 lat (usec): min=147, max=1994, avg=179.81, stdev=41.04 00:13:29.345 clat percentiles (usec): 00:13:29.345 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:13:29.345 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 159], 00:13:29.345 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 188], 00:13:29.345 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 269], 99.95th=[ 529], 00:13:29.345 | 99.99th=[ 1975] 00:13:29.345 bw ( KiB/s): min=11552, max=11552, per=34.50%, avg=11552.00, stdev= 0.00, samples=1 00:13:29.345 iops : min= 2888, max= 2888, avg=2888.00, stdev= 0.00, samples=1 00:13:29.345 lat (usec) : 250=99.07%, 500=0.85%, 750=0.04% 00:13:29.345 lat (msec) : 2=0.04% 00:13:29.345 cpu : usr=2.20%, sys=7.40%, ctx=4934, majf=0, minf=3 00:13:29.345 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.345 issued rwts: total=2373,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.345 job3: (groupid=0, jobs=1): err= 0: pid=71258: Sat Jul 13 03:02:35 2024 00:13:29.345 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:29.345 slat (nsec): min=16254, max=63630, avg=21442.61, stdev=4002.08 00:13:29.345 clat (usec): min=196, max=1068, avg=326.68, stdev=39.20 00:13:29.345 lat (usec): min=224, max=1091, avg=348.12, stdev=39.55 00:13:29.345 clat percentiles (usec): 00:13:29.345 | 1.00th=[ 221], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:13:29.345 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 330], 00:13:29.345 | 70.00th=[ 334], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 359], 00:13:29.345 | 99.00th=[ 441], 99.50th=[ 578], 99.90th=[ 791], 99.95th=[ 1074], 00:13:29.345 | 99.99th=[ 1074] 00:13:29.345 write: IOPS=1571, BW=6286KiB/s (6437kB/s)(6292KiB/1001msec); 0 zone resets 00:13:29.345 slat (usec): min=27, max=123, avg=36.55, stdev= 7.43 00:13:29.345 clat (usec): min=145, max=512, avg=253.92, stdev=35.37 00:13:29.345 lat (usec): min=180, max=563, avg=290.47, stdev=37.46 00:13:29.345 clat percentiles (usec): 00:13:29.345 | 1.00th=[ 155], 5.00th=[ 212], 10.00th=[ 229], 20.00th=[ 239], 00:13:29.345 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:13:29.345 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:13:29.345 | 99.00th=[ 420], 99.50th=[ 453], 99.90th=[ 494], 99.95th=[ 515], 00:13:29.346 | 99.99th=[ 515] 00:13:29.346 bw ( KiB/s): min= 8192, max= 8192, per=24.46%, avg=8192.00, stdev= 0.00, samples=1 00:13:29.346 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:29.346 lat (usec) : 250=22.61%, 500=76.94%, 750=0.39%, 1000=0.03% 00:13:29.346 lat (msec) : 2=0.03% 00:13:29.346 cpu : usr=1.70%, sys=7.20%, ctx=3109, majf=0, minf=11 00:13:29.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.346 issued rwts: total=1536,1573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.346 00:13:29.346 Run status group 0 (all jobs): 00:13:29.346 READ: bw=31.1MiB/s (32.6MB/s), 6022KiB/s-9.99MiB/s (6167kB/s-10.5MB/s), io=31.2MiB (32.7MB), run=1001-1001msec 00:13:29.346 WRITE: bw=32.7MiB/s (34.3MB/s), 6138KiB/s-10.6MiB/s (6285kB/s-11.1MB/s), io=32.7MiB (34.3MB), run=1001-1001msec 00:13:29.346 00:13:29.346 Disk stats (read/write): 00:13:29.346 nvme0n1: ios=1175/1536, merge=0/0, ticks=408/413, in_queue=821, util=87.58% 00:13:29.346 nvme0n2: ios=2089/2526, merge=0/0, ticks=406/385, in_queue=791, util=88.25% 00:13:29.346 nvme0n3: ios=2069/2189, merge=0/0, ticks=474/358, in_queue=832, util=89.77% 00:13:29.346 nvme0n4: ios=1173/1536, merge=0/0, ticks=400/402, in_queue=802, util=89.81% 00:13:29.346 03:02:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:29.346 [global] 00:13:29.346 thread=1 00:13:29.346 invalidate=1 00:13:29.346 rw=randwrite 00:13:29.346 
time_based=1 00:13:29.346 runtime=1 00:13:29.346 ioengine=libaio 00:13:29.346 direct=1 00:13:29.346 bs=4096 00:13:29.346 iodepth=1 00:13:29.346 norandommap=0 00:13:29.346 numjobs=1 00:13:29.346 00:13:29.346 verify_dump=1 00:13:29.346 verify_backlog=512 00:13:29.346 verify_state_save=0 00:13:29.346 do_verify=1 00:13:29.346 verify=crc32c-intel 00:13:29.346 [job0] 00:13:29.346 filename=/dev/nvme0n1 00:13:29.346 [job1] 00:13:29.346 filename=/dev/nvme0n2 00:13:29.346 [job2] 00:13:29.346 filename=/dev/nvme0n3 00:13:29.346 [job3] 00:13:29.346 filename=/dev/nvme0n4 00:13:29.346 Could not set queue depth (nvme0n1) 00:13:29.346 Could not set queue depth (nvme0n2) 00:13:29.346 Could not set queue depth (nvme0n3) 00:13:29.346 Could not set queue depth (nvme0n4) 00:13:29.604 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.604 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.604 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.604 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.604 fio-3.35 00:13:29.604 Starting 4 threads 00:13:30.987 00:13:30.987 job0: (groupid=0, jobs=1): err= 0: pid=71311: Sat Jul 13 03:02:37 2024 00:13:30.987 read: IOPS=2455, BW=9822KiB/s (10.1MB/s)(9832KiB/1001msec) 00:13:30.987 slat (nsec): min=12954, max=44985, avg=17228.66, stdev=3638.98 00:13:30.987 clat (usec): min=163, max=625, avg=201.97, stdev=22.91 00:13:30.987 lat (usec): min=177, max=652, avg=219.20, stdev=23.15 00:13:30.987 clat percentiles (usec): 00:13:30.987 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:13:30.987 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:13:30.987 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 235], 00:13:30.987 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 347], 99.95th=[ 351], 00:13:30.987 | 99.99th=[ 627] 00:13:30.987 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:30.987 slat (usec): min=15, max=102, avg=25.02, stdev= 6.30 00:13:30.987 clat (usec): min=111, max=2049, avg=151.29, stdev=56.80 00:13:30.987 lat (usec): min=130, max=2073, avg=176.31, stdev=57.25 00:13:30.987 clat percentiles (usec): 00:13:30.987 | 1.00th=[ 117], 5.00th=[ 124], 10.00th=[ 130], 20.00th=[ 139], 00:13:30.987 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:13:30.987 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 180], 00:13:30.987 | 99.00th=[ 192], 99.50th=[ 245], 99.90th=[ 652], 99.95th=[ 1975], 00:13:30.987 | 99.99th=[ 2057] 00:13:30.987 bw ( KiB/s): min=12000, max=12000, per=31.01%, avg=12000.00, stdev= 0.00, samples=1 00:13:30.987 iops : min= 3000, max= 3000, avg=3000.00, stdev= 0.00, samples=1 00:13:30.987 lat (usec) : 250=98.55%, 500=1.36%, 750=0.06% 00:13:30.987 lat (msec) : 2=0.02%, 4=0.02% 00:13:30.987 cpu : usr=2.00%, sys=8.50%, ctx=5024, majf=0, minf=15 00:13:30.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.987 issued rwts: total=2458,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.987 job1: (groupid=0, jobs=1): err= 0: pid=71312: Sat Jul 13 
03:02:37 2024 00:13:30.987 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:30.987 slat (nsec): min=13087, max=77907, avg=16644.60, stdev=4256.95 00:13:30.987 clat (usec): min=163, max=6437, avg=240.31, stdev=159.27 00:13:30.987 lat (usec): min=178, max=6452, avg=256.95, stdev=159.80 00:13:30.987 clat percentiles (usec): 00:13:30.987 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:13:30.987 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 208], 00:13:30.987 | 70.00th=[ 223], 80.00th=[ 314], 90.00th=[ 363], 95.00th=[ 429], 00:13:30.987 | 99.00th=[ 465], 99.50th=[ 482], 99.90th=[ 537], 99.95th=[ 996], 00:13:30.987 | 99.99th=[ 6456] 00:13:30.987 write: IOPS=2511, BW=9.81MiB/s (10.3MB/s)(9.82MiB/1001msec); 0 zone resets 00:13:30.987 slat (usec): min=12, max=104, avg=22.50, stdev= 6.05 00:13:30.987 clat (usec): min=112, max=1041, avg=162.53, stdev=44.45 00:13:30.987 lat (usec): min=134, max=1063, avg=185.04, stdev=44.65 00:13:30.987 clat percentiles (usec): 00:13:30.987 | 1.00th=[ 121], 5.00th=[ 126], 10.00th=[ 130], 20.00th=[ 137], 00:13:30.987 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 155], 00:13:30.988 | 70.00th=[ 163], 80.00th=[ 176], 90.00th=[ 219], 95.00th=[ 260], 00:13:30.988 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 412], 99.95th=[ 553], 00:13:30.988 | 99.99th=[ 1045] 00:13:30.988 bw ( KiB/s): min=12263, max=12263, per=31.69%, avg=12263.00, stdev= 0.00, samples=1 00:13:30.988 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:13:30.988 lat (usec) : 250=85.40%, 500=14.47%, 750=0.07%, 1000=0.02% 00:13:30.988 lat (msec) : 2=0.02%, 10=0.02% 00:13:30.988 cpu : usr=2.10%, sys=7.00%, ctx=4564, majf=0, minf=10 00:13:30.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.988 issued rwts: total=2048,2514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.988 job2: (groupid=0, jobs=1): err= 0: pid=71313: Sat Jul 13 03:02:37 2024 00:13:30.988 read: IOPS=2343, BW=9375KiB/s (9600kB/s)(9384KiB/1001msec) 00:13:30.988 slat (nsec): min=12590, max=44035, avg=14628.33, stdev=3305.55 00:13:30.988 clat (usec): min=174, max=990, avg=207.87, stdev=26.05 00:13:30.988 lat (usec): min=187, max=1010, avg=222.50, stdev=26.39 00:13:30.988 clat percentiles (usec): 00:13:30.988 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:13:30.988 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:13:30.988 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 237], 00:13:30.988 | 99.00th=[ 285], 99.50th=[ 330], 99.90th=[ 457], 99.95th=[ 498], 00:13:30.988 | 99.99th=[ 988] 00:13:30.988 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:30.988 slat (nsec): min=14851, max=87407, avg=22039.57, stdev=5541.59 00:13:30.988 clat (usec): min=125, max=2119, avg=161.40, stdev=59.23 00:13:30.988 lat (usec): min=144, max=2152, avg=183.44, stdev=59.99 00:13:30.988 clat percentiles (usec): 00:13:30.988 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 145], 00:13:30.988 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 161], 00:13:30.988 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 192], 00:13:30.988 | 99.00th=[ 210], 99.50th=[ 375], 99.90th=[ 947], 99.95th=[ 1696], 00:13:30.988 | 99.99th=[ 
2114] 00:13:30.988 bw ( KiB/s): min=11672, max=11672, per=30.17%, avg=11672.00, stdev= 0.00, samples=1 00:13:30.988 iops : min= 2918, max= 2918, avg=2918.00, stdev= 0.00, samples=1 00:13:30.988 lat (usec) : 250=98.78%, 500=1.04%, 750=0.10%, 1000=0.04% 00:13:30.988 lat (msec) : 2=0.02%, 4=0.02% 00:13:30.988 cpu : usr=2.30%, sys=6.90%, ctx=4909, majf=0, minf=7 00:13:30.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.988 issued rwts: total=2346,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.988 job3: (groupid=0, jobs=1): err= 0: pid=71314: Sat Jul 13 03:02:37 2024 00:13:30.988 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:30.988 slat (nsec): min=9700, max=66309, avg=16784.83, stdev=4134.60 00:13:30.988 clat (usec): min=182, max=966, avg=258.94, stdev=69.97 00:13:30.988 lat (usec): min=196, max=983, avg=275.73, stdev=69.82 00:13:30.988 clat percentiles (usec): 00:13:30.988 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 210], 00:13:30.988 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 237], 00:13:30.988 | 70.00th=[ 251], 80.00th=[ 322], 90.00th=[ 367], 95.00th=[ 416], 00:13:30.988 | 99.00th=[ 457], 99.50th=[ 469], 99.90th=[ 498], 99.95th=[ 506], 00:13:30.988 | 99.99th=[ 971] 00:13:30.988 write: IOPS=2046, BW=8188KiB/s (8384kB/s)(8196KiB/1001msec); 0 zone resets 00:13:30.988 slat (usec): min=13, max=232, avg=24.47, stdev= 7.11 00:13:30.988 clat (usec): min=22, max=5157, avg=184.12, stdev=118.79 00:13:30.988 lat (usec): min=152, max=5175, avg=208.59, stdev=118.65 00:13:30.988 clat percentiles (usec): 00:13:30.988 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:13:30.988 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 178], 00:13:30.988 | 70.00th=[ 184], 80.00th=[ 198], 90.00th=[ 229], 95.00th=[ 269], 00:13:30.988 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 750], 99.95th=[ 1303], 00:13:30.988 | 99.99th=[ 5145] 00:13:30.988 bw ( KiB/s): min=10299, max=10299, per=26.62%, avg=10299.00, stdev= 0.00, samples=1 00:13:30.988 iops : min= 2574, max= 2574, avg=2574.00, stdev= 0.00, samples=1 00:13:30.988 lat (usec) : 50=0.02%, 250=81.16%, 500=18.70%, 750=0.02%, 1000=0.05% 00:13:30.988 lat (msec) : 2=0.02%, 10=0.02% 00:13:30.988 cpu : usr=1.90%, sys=6.60%, ctx=4099, majf=0, minf=13 00:13:30.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:30.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.988 issued rwts: total=2048,2049,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:30.988 00:13:30.988 Run status group 0 (all jobs): 00:13:30.988 READ: bw=34.7MiB/s (36.4MB/s), 8184KiB/s-9822KiB/s (8380kB/s-10.1MB/s), io=34.8MiB (36.5MB), run=1001-1001msec 00:13:30.988 WRITE: bw=37.8MiB/s (39.6MB/s), 8188KiB/s-9.99MiB/s (8384kB/s-10.5MB/s), io=37.8MiB (39.7MB), run=1001-1001msec 00:13:30.988 00:13:30.988 Disk stats (read/write): 00:13:30.988 nvme0n1: ios=2098/2261, merge=0/0, ticks=447/369, in_queue=816, util=88.37% 00:13:30.988 nvme0n2: ios=2039/2048, merge=0/0, ticks=474/311, in_queue=785, util=88.88% 00:13:30.988 nvme0n3: ios=2075/2173, merge=0/0, 
ticks=486/374, in_queue=860, util=89.91% 00:13:30.988 nvme0n4: ios=1731/2048, merge=0/0, ticks=435/387, in_queue=822, util=90.06% 00:13:30.988 03:02:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:30.988 [global] 00:13:30.988 thread=1 00:13:30.988 invalidate=1 00:13:30.988 rw=write 00:13:30.988 time_based=1 00:13:30.988 runtime=1 00:13:30.988 ioengine=libaio 00:13:30.988 direct=1 00:13:30.988 bs=4096 00:13:30.988 iodepth=128 00:13:30.988 norandommap=0 00:13:30.988 numjobs=1 00:13:30.988 00:13:30.988 verify_dump=1 00:13:30.988 verify_backlog=512 00:13:30.988 verify_state_save=0 00:13:30.988 do_verify=1 00:13:30.988 verify=crc32c-intel 00:13:30.988 [job0] 00:13:30.988 filename=/dev/nvme0n1 00:13:30.988 [job1] 00:13:30.988 filename=/dev/nvme0n2 00:13:30.988 [job2] 00:13:30.988 filename=/dev/nvme0n3 00:13:30.988 [job3] 00:13:30.988 filename=/dev/nvme0n4 00:13:30.988 Could not set queue depth (nvme0n1) 00:13:30.988 Could not set queue depth (nvme0n2) 00:13:30.988 Could not set queue depth (nvme0n3) 00:13:30.988 Could not set queue depth (nvme0n4) 00:13:30.988 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:30.988 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:30.988 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:30.988 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:30.988 fio-3.35 00:13:30.988 Starting 4 threads 00:13:32.370 00:13:32.370 job0: (groupid=0, jobs=1): err= 0: pid=71371: Sat Jul 13 03:02:38 2024 00:13:32.370 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:13:32.370 slat (usec): min=7, max=9056, avg=130.32, stdev=597.78 00:13:32.370 clat (usec): min=9641, max=62620, avg=17224.54, stdev=11154.37 00:13:32.370 lat (usec): min=9670, max=63044, avg=17354.86, stdev=11238.51 00:13:32.370 clat percentiles (usec): 00:13:32.370 | 1.00th=[10683], 5.00th=[11469], 10.00th=[12125], 20.00th=[12256], 00:13:32.370 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:13:32.370 | 70.00th=[13173], 80.00th=[14615], 90.00th=[39584], 95.00th=[45351], 00:13:32.370 | 99.00th=[55837], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:13:32.370 | 99.99th=[62653] 00:13:32.370 write: IOPS=3877, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1001msec); 0 zone resets 00:13:32.370 slat (usec): min=6, max=12000, avg=129.22, stdev=707.60 00:13:32.370 clat (usec): min=356, max=39870, avg=16343.14, stdev=8165.47 00:13:32.370 lat (usec): min=2920, max=39893, avg=16472.36, stdev=8219.95 00:13:32.370 clat percentiles (usec): 00:13:32.370 | 1.00th=[ 3982], 5.00th=[10945], 10.00th=[11207], 20.00th=[11469], 00:13:32.370 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:13:32.370 | 70.00th=[13173], 80.00th=[25822], 90.00th=[32637], 95.00th=[33424], 00:13:32.370 | 99.00th=[36439], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:13:32.370 | 99.99th=[40109] 00:13:32.370 bw ( KiB/s): min= 9168, max= 9168, per=17.35%, avg=9168.00, stdev= 0.00, samples=1 00:13:32.370 iops : min= 2292, max= 2292, avg=2292.00, stdev= 0.00, samples=1 00:13:32.370 lat (usec) : 500=0.01% 00:13:32.370 lat (msec) : 4=0.51%, 10=0.90%, 20=77.96%, 50=19.10%, 100=1.51% 00:13:32.370 cpu : usr=3.50%, sys=11.80%, ctx=320, majf=0, minf=4 00:13:32.370 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:32.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:32.370 issued rwts: total=3584,3881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:32.370 job1: (groupid=0, jobs=1): err= 0: pid=71374: Sat Jul 13 03:02:38 2024 00:13:32.370 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8160KiB/1006msec) 00:13:32.370 slat (usec): min=8, max=16987, avg=243.52, stdev=1283.43 00:13:32.370 clat (usec): min=1828, max=71075, avg=29945.27, stdev=10871.31 00:13:32.370 lat (usec): min=5781, max=71093, avg=30188.79, stdev=10998.42 00:13:32.370 clat percentiles (usec): 00:13:32.370 | 1.00th=[ 6259], 5.00th=[18744], 10.00th=[22152], 20.00th=[22414], 00:13:32.370 | 30.00th=[22676], 40.00th=[23200], 50.00th=[26084], 60.00th=[30802], 00:13:32.370 | 70.00th=[33817], 80.00th=[39584], 90.00th=[45351], 95.00th=[54264], 00:13:32.370 | 99.00th=[56361], 99.50th=[56361], 99.90th=[61604], 99.95th=[62653], 00:13:32.370 | 99.99th=[70779] 00:13:32.370 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:13:32.370 slat (usec): min=9, max=12542, avg=238.25, stdev=1078.86 00:13:32.370 clat (usec): min=14175, max=67556, avg=32121.28, stdev=12414.26 00:13:32.370 lat (usec): min=14204, max=67584, avg=32359.53, stdev=12501.82 00:13:32.370 clat percentiles (usec): 00:13:32.370 | 1.00th=[14615], 5.00th=[15664], 10.00th=[18744], 20.00th=[19530], 00:13:32.370 | 30.00th=[25560], 40.00th=[26608], 50.00th=[29230], 60.00th=[33162], 00:13:32.370 | 70.00th=[34341], 80.00th=[41681], 90.00th=[50594], 95.00th=[57410], 00:13:32.370 | 99.00th=[64750], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:13:32.370 | 99.99th=[67634] 00:13:32.370 bw ( KiB/s): min= 8192, max= 8208, per=15.52%, avg=8200.00, stdev=11.31, samples=2 00:13:32.370 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:13:32.370 lat (msec) : 2=0.02%, 10=1.52%, 20=13.38%, 50=76.52%, 100=8.56% 00:13:32.370 cpu : usr=2.59%, sys=6.17%, ctx=222, majf=0, minf=3 00:13:32.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:32.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:32.370 issued rwts: total=2040,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:32.370 job2: (groupid=0, jobs=1): err= 0: pid=71376: Sat Jul 13 03:02:38 2024 00:13:32.370 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:13:32.370 slat (usec): min=9, max=3808, avg=102.73, stdev=482.33 00:13:32.370 clat (usec): min=9665, max=15395, avg=13694.15, stdev=1039.58 00:13:32.370 lat (usec): min=11609, max=15412, avg=13796.87, stdev=932.42 00:13:32.370 clat percentiles (usec): 00:13:32.370 | 1.00th=[10552], 5.00th=[12256], 10.00th=[12387], 20.00th=[12649], 00:13:32.370 | 30.00th=[12911], 40.00th=[13304], 50.00th=[14091], 60.00th=[14353], 00:13:32.370 | 70.00th=[14484], 80.00th=[14615], 90.00th=[14746], 95.00th=[15008], 00:13:32.370 | 99.00th=[15270], 99.50th=[15270], 99.90th=[15270], 99.95th=[15401], 00:13:32.370 | 99.99th=[15401] 00:13:32.370 write: IOPS=4791, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1002msec); 0 zone resets 00:13:32.370 slat (usec): min=12, max=3693, avg=101.71, stdev=426.97 00:13:32.370 clat (usec): 
min=277, max=15296, avg=13213.51, stdev=1442.23 00:13:32.370 lat (usec): min=3117, max=15317, avg=13315.22, stdev=1379.08 00:13:32.370 clat percentiles (usec): 00:13:32.370 | 1.00th=[ 6915], 5.00th=[11600], 10.00th=[11863], 20.00th=[12256], 00:13:32.370 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13698], 60.00th=[13960], 00:13:32.370 | 70.00th=[14091], 80.00th=[14222], 90.00th=[14353], 95.00th=[14615], 00:13:32.370 | 99.00th=[14877], 99.50th=[15139], 99.90th=[15270], 99.95th=[15270], 00:13:32.370 | 99.99th=[15270] 00:13:32.370 bw ( KiB/s): min=16904, max=20521, per=35.41%, avg=18712.50, stdev=2557.61, samples=2 00:13:32.370 iops : min= 4226, max= 5130, avg=4678.00, stdev=639.22, samples=2 00:13:32.370 lat (usec) : 500=0.01% 00:13:32.370 lat (msec) : 4=0.34%, 10=0.81%, 20=98.84% 00:13:32.370 cpu : usr=5.00%, sys=14.09%, ctx=306, majf=0, minf=5 00:13:32.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:32.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:32.370 issued rwts: total=4608,4801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:32.370 job3: (groupid=0, jobs=1): err= 0: pid=71377: Sat Jul 13 03:02:38 2024 00:13:32.370 read: IOPS=2259, BW=9038KiB/s (9255kB/s)(9092KiB/1006msec) 00:13:32.370 slat (usec): min=13, max=14923, avg=232.41, stdev=1302.63 00:13:32.370 clat (usec): min=1388, max=54037, avg=27748.85, stdev=9212.80 00:13:32.370 lat (usec): min=9751, max=54059, avg=27981.26, stdev=9213.26 00:13:32.370 clat percentiles (usec): 00:13:32.370 | 1.00th=[10290], 5.00th=[18482], 10.00th=[20055], 20.00th=[20841], 00:13:32.370 | 30.00th=[21103], 40.00th=[23987], 50.00th=[25822], 60.00th=[27395], 00:13:32.370 | 70.00th=[30016], 80.00th=[31851], 90.00th=[40109], 95.00th=[52691], 00:13:32.370 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:13:32.370 | 99.99th=[54264] 00:13:32.370 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:13:32.370 slat (usec): min=14, max=12083, avg=176.64, stdev=910.86 00:13:32.370 clat (usec): min=12523, max=49371, avg=24655.22, stdev=8154.34 00:13:32.370 lat (usec): min=16207, max=49444, avg=24831.86, stdev=8139.18 00:13:32.370 clat percentiles (usec): 00:13:32.370 | 1.00th=[14484], 5.00th=[16450], 10.00th=[16581], 20.00th=[17171], 00:13:32.370 | 30.00th=[18220], 40.00th=[20055], 50.00th=[20841], 60.00th=[25035], 00:13:32.370 | 70.00th=[30540], 80.00th=[32900], 90.00th=[34341], 95.00th=[38536], 00:13:32.370 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:13:32.370 | 99.99th=[49546] 00:13:32.370 bw ( KiB/s): min= 8200, max=12280, per=19.38%, avg=10240.00, stdev=2885.00, samples=2 00:13:32.370 iops : min= 2050, max= 3070, avg=2560.00, stdev=721.25, samples=2 00:13:32.370 lat (msec) : 2=0.02%, 10=0.23%, 20=26.36%, 50=70.18%, 100=3.21% 00:13:32.370 cpu : usr=2.59%, sys=8.66%, ctx=152, majf=0, minf=1 00:13:32.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:32.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:32.370 issued rwts: total=2273,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:32.370 00:13:32.370 Run status group 0 (all jobs): 00:13:32.370 READ: 
bw=48.6MiB/s (50.9MB/s), 8111KiB/s-18.0MiB/s (8306kB/s-18.8MB/s), io=48.8MiB (51.2MB), run=1001-1006msec 00:13:32.370 WRITE: bw=51.6MiB/s (54.1MB/s), 8143KiB/s-18.7MiB/s (8339kB/s-19.6MB/s), io=51.9MiB (54.4MB), run=1001-1006msec 00:13:32.370 00:13:32.370 Disk stats (read/write): 00:13:32.370 nvme0n1: ios=3052/3072, merge=0/0, ticks=21517/20946, in_queue=42463, util=86.87% 00:13:32.370 nvme0n2: ios=1585/1967, merge=0/0, ticks=23642/27955, in_queue=51597, util=88.90% 00:13:32.370 nvme0n3: ios=4124/4128, merge=0/0, ticks=12403/11358, in_queue=23761, util=90.47% 00:13:32.370 nvme0n4: ios=2075/2080, merge=0/0, ticks=14599/11151, in_queue=25750, util=90.62% 00:13:32.370 03:02:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:32.370 [global] 00:13:32.370 thread=1 00:13:32.370 invalidate=1 00:13:32.370 rw=randwrite 00:13:32.370 time_based=1 00:13:32.370 runtime=1 00:13:32.370 ioengine=libaio 00:13:32.370 direct=1 00:13:32.371 bs=4096 00:13:32.371 iodepth=128 00:13:32.371 norandommap=0 00:13:32.371 numjobs=1 00:13:32.371 00:13:32.371 verify_dump=1 00:13:32.371 verify_backlog=512 00:13:32.371 verify_state_save=0 00:13:32.371 do_verify=1 00:13:32.371 verify=crc32c-intel 00:13:32.371 [job0] 00:13:32.371 filename=/dev/nvme0n1 00:13:32.371 [job1] 00:13:32.371 filename=/dev/nvme0n2 00:13:32.371 [job2] 00:13:32.371 filename=/dev/nvme0n3 00:13:32.371 [job3] 00:13:32.371 filename=/dev/nvme0n4 00:13:32.371 Could not set queue depth (nvme0n1) 00:13:32.371 Could not set queue depth (nvme0n2) 00:13:32.371 Could not set queue depth (nvme0n3) 00:13:32.371 Could not set queue depth (nvme0n4) 00:13:32.371 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:32.371 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:32.371 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:32.371 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:32.371 fio-3.35 00:13:32.371 Starting 4 threads 00:13:33.745 00:13:33.745 job0: (groupid=0, jobs=1): err= 0: pid=71430: Sat Jul 13 03:02:39 2024 00:13:33.745 read: IOPS=2219, BW=8877KiB/s (9090kB/s)(8948KiB/1008msec) 00:13:33.745 slat (usec): min=12, max=14611, avg=205.58, stdev=958.12 00:13:33.745 clat (usec): min=414, max=57732, avg=24445.66, stdev=6656.82 00:13:33.745 lat (usec): min=7483, max=57753, avg=24651.24, stdev=6713.31 00:13:33.745 clat percentiles (usec): 00:13:33.745 | 1.00th=[ 7963], 5.00th=[17695], 10.00th=[20055], 20.00th=[21103], 00:13:33.745 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22152], 60.00th=[22938], 00:13:33.745 | 70.00th=[25035], 80.00th=[31065], 90.00th=[31589], 95.00th=[33424], 00:13:33.745 | 99.00th=[50594], 99.50th=[53740], 99.90th=[57934], 99.95th=[57934], 00:13:33.745 | 99.99th=[57934] 00:13:33.745 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:13:33.745 slat (usec): min=14, max=7568, avg=204.21, stdev=828.86 00:13:33.745 clat (usec): min=10659, max=70076, avg=28320.57, stdev=15695.68 00:13:33.745 lat (usec): min=10697, max=70111, avg=28524.78, stdev=15796.29 00:13:33.745 clat percentiles (usec): 00:13:33.745 | 1.00th=[13304], 5.00th=[14484], 10.00th=[14615], 20.00th=[15008], 00:13:33.745 | 30.00th=[15401], 40.00th=[17433], 50.00th=[17957], 60.00th=[24773], 00:13:33.745 
| 70.00th=[38536], 80.00th=[44827], 90.00th=[53216], 95.00th=[57934], 00:13:33.745 | 99.00th=[62129], 99.50th=[66323], 99.90th=[69731], 99.95th=[69731], 00:13:33.745 | 99.99th=[69731] 00:13:33.745 bw ( KiB/s): min= 8192, max=12288, per=17.45%, avg=10240.00, stdev=2896.31, samples=2 00:13:33.745 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:13:33.745 lat (usec) : 500=0.02% 00:13:33.745 lat (msec) : 10=1.31%, 20=32.29%, 50=58.43%, 100=7.94% 00:13:33.745 cpu : usr=2.98%, sys=8.34%, ctx=244, majf=0, minf=11 00:13:33.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:33.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.745 issued rwts: total=2237,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.745 job1: (groupid=0, jobs=1): err= 0: pid=71431: Sat Jul 13 03:02:39 2024 00:13:33.745 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:13:33.745 slat (usec): min=11, max=14923, avg=208.78, stdev=1186.14 00:13:33.745 clat (usec): min=13403, max=58087, avg=26247.40, stdev=10221.24 00:13:33.745 lat (usec): min=16165, max=58103, avg=26456.17, stdev=10242.25 00:13:33.745 clat percentiles (usec): 00:13:33.745 | 1.00th=[15401], 5.00th=[16581], 10.00th=[18220], 20.00th=[19006], 00:13:33.745 | 30.00th=[19268], 40.00th=[19530], 50.00th=[23200], 60.00th=[26870], 00:13:33.745 | 70.00th=[28967], 80.00th=[30016], 90.00th=[41157], 95.00th=[49546], 00:13:33.745 | 99.00th=[57934], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:13:33.745 | 99.99th=[57934] 00:13:33.745 write: IOPS=2991, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1006msec); 0 zone resets 00:13:33.745 slat (usec): min=13, max=8024, avg=147.55, stdev=722.87 00:13:33.745 clat (usec): min=773, max=45978, avg=19551.93, stdev=6476.67 00:13:33.745 lat (usec): min=7189, max=46015, avg=19699.48, stdev=6460.98 00:13:33.745 clat percentiles (usec): 00:13:33.745 | 1.00th=[ 8094], 5.00th=[14746], 10.00th=[15008], 20.00th=[15270], 00:13:33.745 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16712], 60.00th=[19268], 00:13:33.745 | 70.00th=[20841], 80.00th=[22938], 90.00th=[30278], 95.00th=[35390], 00:13:33.745 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:13:33.745 | 99.99th=[45876] 00:13:33.745 bw ( KiB/s): min= 8192, max=14856, per=19.64%, avg=11524.00, stdev=4712.16, samples=2 00:13:33.745 iops : min= 2048, max= 3714, avg=2881.00, stdev=1178.04, samples=2 00:13:33.745 lat (usec) : 1000=0.02% 00:13:33.745 lat (msec) : 10=0.57%, 20=55.11%, 50=42.07%, 100=2.23% 00:13:33.745 cpu : usr=2.99%, sys=9.35%, ctx=182, majf=0, minf=17 00:13:33.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:33.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.745 issued rwts: total=2560,3009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.745 job2: (groupid=0, jobs=1): err= 0: pid=71432: Sat Jul 13 03:02:39 2024 00:13:33.745 read: IOPS=4116, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1003msec) 00:13:33.745 slat (usec): min=11, max=3426, avg=111.30, stdev=521.58 00:13:33.745 clat (usec): min=322, max=16977, avg=14762.97, stdev=1192.50 00:13:33.745 lat (usec): min=3630, max=16991, avg=14874.27, stdev=1068.39 
00:13:33.745 clat percentiles (usec): 00:13:33.745 | 1.00th=[11469], 5.00th=[14222], 10.00th=[14353], 20.00th=[14484], 00:13:33.745 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:13:33.745 | 70.00th=[15139], 80.00th=[15270], 90.00th=[15401], 95.00th=[15664], 00:13:33.745 | 99.00th=[16188], 99.50th=[16712], 99.90th=[16909], 99.95th=[16909], 00:13:33.745 | 99.99th=[16909] 00:13:33.745 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:13:33.745 slat (usec): min=12, max=3687, avg=109.24, stdev=463.55 00:13:33.745 clat (usec): min=6902, max=16607, avg=14234.79, stdev=898.85 00:13:33.745 lat (usec): min=6926, max=17002, avg=14344.02, stdev=777.44 00:13:33.745 clat percentiles (usec): 00:13:33.745 | 1.00th=[10945], 5.00th=[13698], 10.00th=[13829], 20.00th=[14091], 00:13:33.745 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14353], 60.00th=[14353], 00:13:33.745 | 70.00th=[14484], 80.00th=[14615], 90.00th=[14746], 95.00th=[15139], 00:13:33.745 | 99.00th=[16188], 99.50th=[16581], 99.90th=[16581], 99.95th=[16581], 00:13:33.745 | 99.99th=[16581] 00:13:33.745 bw ( KiB/s): min=17672, max=18432, per=30.77%, avg=18052.00, stdev=537.40, samples=2 00:13:33.745 iops : min= 4418, max= 4608, avg=4513.00, stdev=134.35, samples=2 00:13:33.745 lat (usec) : 500=0.01% 00:13:33.745 lat (msec) : 4=0.27%, 10=0.46%, 20=99.26% 00:13:33.745 cpu : usr=5.59%, sys=12.87%, ctx=280, majf=0, minf=3 00:13:33.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:33.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.745 issued rwts: total=4129,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.745 job3: (groupid=0, jobs=1): err= 0: pid=71433: Sat Jul 13 03:02:39 2024 00:13:33.745 read: IOPS=4315, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1002msec) 00:13:33.745 slat (usec): min=6, max=6449, avg=112.15, stdev=570.29 00:13:33.745 clat (usec): min=848, max=20998, avg=14401.88, stdev=1747.24 00:13:33.745 lat (usec): min=2712, max=25216, avg=14514.03, stdev=1786.83 00:13:33.745 clat percentiles (usec): 00:13:33.745 | 1.00th=[ 7767], 5.00th=[11863], 10.00th=[13042], 20.00th=[13960], 00:13:33.745 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:13:33.745 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[16909], 00:13:33.745 | 99.00th=[19268], 99.50th=[20055], 99.90th=[20579], 99.95th=[20841], 00:13:33.745 | 99.99th=[21103] 00:13:33.745 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:13:33.745 slat (usec): min=11, max=7634, avg=103.73, stdev=583.25 00:13:33.745 clat (usec): min=7164, max=23394, avg=13949.33, stdev=1521.94 00:13:33.745 lat (usec): min=7211, max=23487, avg=14053.06, stdev=1616.83 00:13:33.745 clat percentiles (usec): 00:13:33.745 | 1.00th=[10159], 5.00th=[12256], 10.00th=[12780], 20.00th=[13042], 00:13:33.745 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:13:33.745 | 70.00th=[14222], 80.00th=[15008], 90.00th=[15795], 95.00th=[16712], 00:13:33.745 | 99.00th=[19006], 99.50th=[20055], 99.90th=[20841], 99.95th=[21365], 00:13:33.745 | 99.99th=[23462] 00:13:33.745 bw ( KiB/s): min=18208, max=18656, per=31.42%, avg=18432.00, stdev=316.78, samples=2 00:13:33.745 iops : min= 4552, max= 4664, avg=4608.00, stdev=79.20, samples=2 00:13:33.745 lat (usec) : 1000=0.01% 00:13:33.745 lat 
(msec) : 4=0.22%, 10=1.22%, 20=97.97%, 50=0.57% 00:13:33.745 cpu : usr=4.40%, sys=13.29%, ctx=304, majf=0, minf=10 00:13:33.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:33.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.745 issued rwts: total=4324,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.745 00:13:33.745 Run status group 0 (all jobs): 00:13:33.745 READ: bw=51.3MiB/s (53.8MB/s), 8877KiB/s-16.9MiB/s (9090kB/s-17.7MB/s), io=51.8MiB (54.3MB), run=1002-1008msec 00:13:33.745 WRITE: bw=57.3MiB/s (60.1MB/s), 9.92MiB/s-18.0MiB/s (10.4MB/s-18.8MB/s), io=57.8MiB (60.6MB), run=1002-1008msec 00:13:33.745 00:13:33.746 Disk stats (read/write): 00:13:33.746 nvme0n1: ios=2098/2239, merge=0/0, ticks=24498/25868, in_queue=50366, util=87.78% 00:13:33.746 nvme0n2: ios=2086/2528, merge=0/0, ticks=14162/10933, in_queue=25095, util=88.24% 00:13:33.746 nvme0n3: ios=3601/3904, merge=0/0, ticks=11771/11818, in_queue=23589, util=89.46% 00:13:33.746 nvme0n4: ios=3584/4095, merge=0/0, ticks=24546/24258, in_queue=48804, util=89.60% 00:13:33.746 03:02:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:33.746 03:02:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=71446 00:13:33.746 03:02:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:33.746 03:02:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:33.746 [global] 00:13:33.746 thread=1 00:13:33.746 invalidate=1 00:13:33.746 rw=read 00:13:33.746 time_based=1 00:13:33.746 runtime=10 00:13:33.746 ioengine=libaio 00:13:33.746 direct=1 00:13:33.746 bs=4096 00:13:33.746 iodepth=1 00:13:33.746 norandommap=1 00:13:33.746 numjobs=1 00:13:33.746 00:13:33.746 [job0] 00:13:33.746 filename=/dev/nvme0n1 00:13:33.746 [job1] 00:13:33.746 filename=/dev/nvme0n2 00:13:33.746 [job2] 00:13:33.746 filename=/dev/nvme0n3 00:13:33.746 [job3] 00:13:33.746 filename=/dev/nvme0n4 00:13:33.746 Could not set queue depth (nvme0n1) 00:13:33.746 Could not set queue depth (nvme0n2) 00:13:33.746 Could not set queue depth (nvme0n3) 00:13:33.746 Could not set queue depth (nvme0n4) 00:13:33.746 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.746 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.746 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.746 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.746 fio-3.35 00:13:33.746 Starting 4 threads 00:13:37.026 03:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:37.026 fio: pid=71489, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:37.026 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=35000320, buflen=4096 00:13:37.026 03:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:37.026 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=55173120, buflen=4096 00:13:37.026 fio: pid=71488, err=121/file:io_u.c:1889, func=io_u error, 
error=Remote I/O error 00:13:37.026 03:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:37.026 03:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:37.284 fio: pid=71486, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:37.284 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=41852928, buflen=4096 00:13:37.284 03:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:37.284 03:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:37.542 fio: pid=71487, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:37.542 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=5832704, buflen=4096 00:13:37.542 00:13:37.542 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71486: Sat Jul 13 03:02:43 2024 00:13:37.542 read: IOPS=2976, BW=11.6MiB/s (12.2MB/s)(39.9MiB/3433msec) 00:13:37.542 slat (usec): min=11, max=14656, avg=23.04, stdev=227.61 00:13:37.542 clat (usec): min=152, max=2646, avg=310.86, stdev=50.70 00:13:37.542 lat (usec): min=167, max=14932, avg=333.90, stdev=232.78 00:13:37.542 clat percentiles (usec): 00:13:37.542 | 1.00th=[ 184], 5.00th=[ 251], 10.00th=[ 269], 20.00th=[ 293], 00:13:37.542 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318], 00:13:37.542 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 351], 00:13:37.542 | 99.00th=[ 461], 99.50th=[ 502], 99.90th=[ 603], 99.95th=[ 709], 00:13:37.542 | 99.99th=[ 2278] 00:13:37.542 bw ( KiB/s): min=11544, max=11976, per=22.31%, avg=11741.33, stdev=142.54, samples=6 00:13:37.542 iops : min= 2886, max= 2994, avg=2935.33, stdev=35.64, samples=6 00:13:37.542 lat (usec) : 250=4.93%, 500=94.53%, 750=0.49%, 1000=0.01% 00:13:37.542 lat (msec) : 2=0.01%, 4=0.02% 00:13:37.542 cpu : usr=1.54%, sys=4.66%, ctx=10228, majf=0, minf=1 00:13:37.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.542 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.542 issued rwts: total=10219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.542 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71487: Sat Jul 13 03:02:43 2024 00:13:37.542 read: IOPS=4682, BW=18.3MiB/s (19.2MB/s)(69.6MiB/3803msec) 00:13:37.542 slat (usec): min=11, max=15749, avg=18.69, stdev=203.56 00:13:37.542 clat (usec): min=145, max=3535, avg=193.36, stdev=51.61 00:13:37.542 lat (usec): min=158, max=15998, avg=212.05, stdev=211.16 00:13:37.542 clat percentiles (usec): 00:13:37.542 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:13:37.542 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:13:37.542 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 221], 95.00th=[ 262], 00:13:37.542 | 99.00th=[ 297], 99.50th=[ 318], 99.90th=[ 519], 99.95th=[ 1237], 00:13:37.542 | 99.99th=[ 2245] 00:13:37.542 bw ( KiB/s): min=14738, max=20040, per=35.66%, avg=18767.57, stdev=1851.84, samples=7 00:13:37.542 iops : min= 3684, max= 5010, avg=4691.71, stdev=463.15, 
samples=7 00:13:37.542 lat (usec) : 250=93.11%, 500=6.77%, 750=0.04%, 1000=0.01% 00:13:37.542 lat (msec) : 2=0.04%, 4=0.01% 00:13:37.542 cpu : usr=1.76%, sys=6.23%, ctx=17818, majf=0, minf=1 00:13:37.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.542 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.542 issued rwts: total=17809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.542 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71488: Sat Jul 13 03:02:43 2024 00:13:37.542 read: IOPS=4204, BW=16.4MiB/s (17.2MB/s)(52.6MiB/3204msec) 00:13:37.542 slat (usec): min=12, max=7786, avg=17.51, stdev=91.91 00:13:37.542 clat (usec): min=178, max=3413, avg=218.65, stdev=45.99 00:13:37.542 lat (usec): min=193, max=8016, avg=236.16, stdev=103.02 00:13:37.542 clat percentiles (usec): 00:13:37.542 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 202], 00:13:37.542 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:13:37.542 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 245], 00:13:37.542 | 99.00th=[ 289], 99.50th=[ 363], 99.90th=[ 465], 99.95th=[ 1090], 00:13:37.542 | 99.99th=[ 1860] 00:13:37.542 bw ( KiB/s): min=16336, max=17216, per=32.00%, avg=16842.00, stdev=360.60, samples=6 00:13:37.542 iops : min= 4084, max= 4304, avg=4210.50, stdev=90.15, samples=6 00:13:37.542 lat (usec) : 250=96.43%, 500=3.47%, 750=0.03%, 1000=0.01% 00:13:37.542 lat (msec) : 2=0.04%, 4=0.01% 00:13:37.542 cpu : usr=1.72%, sys=5.56%, ctx=13475, majf=0, minf=1 00:13:37.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.542 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.542 issued rwts: total=13471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.542 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71489: Sat Jul 13 03:02:43 2024 00:13:37.542 read: IOPS=2920, BW=11.4MiB/s (12.0MB/s)(33.4MiB/2926msec) 00:13:37.542 slat (usec): min=13, max=116, avg=18.47, stdev= 3.83 00:13:37.542 clat (usec): min=186, max=2892, avg=322.08, stdev=47.74 00:13:37.542 lat (usec): min=201, max=2910, avg=340.54, stdev=48.21 00:13:37.542 clat percentiles (usec): 00:13:37.542 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:13:37.542 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:13:37.542 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 343], 95.00th=[ 351], 00:13:37.542 | 99.00th=[ 474], 99.50th=[ 502], 99.90th=[ 603], 99.95th=[ 783], 00:13:37.542 | 99.99th=[ 2900] 00:13:37.542 bw ( KiB/s): min=11528, max=11832, per=22.15%, avg=11656.00, stdev=109.84, samples=5 00:13:37.542 iops : min= 2882, max= 2958, avg=2914.00, stdev=27.46, samples=5 00:13:37.542 lat (usec) : 250=0.53%, 500=98.94%, 750=0.46%, 1000=0.04% 00:13:37.542 lat (msec) : 2=0.01%, 4=0.02% 00:13:37.542 cpu : usr=1.09%, sys=4.89%, ctx=8547, majf=0, minf=1 00:13:37.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:37.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.542 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.542 issued rwts: total=8546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:37.542 00:13:37.542 Run status group 0 (all jobs): 00:13:37.542 READ: bw=51.4MiB/s (53.9MB/s), 11.4MiB/s-18.3MiB/s (12.0MB/s-19.2MB/s), io=195MiB (205MB), run=2926-3803msec 00:13:37.542 00:13:37.542 Disk stats (read/write): 00:13:37.542 nvme0n1: ios=10001/0, merge=0/0, ticks=3149/0, in_queue=3149, util=95.16% 00:13:37.542 nvme0n2: ios=16842/0, merge=0/0, ticks=3318/0, in_queue=3318, util=95.26% 00:13:37.542 nvme0n3: ios=13111/0, merge=0/0, ticks=2928/0, in_queue=2928, util=96.37% 00:13:37.542 nvme0n4: ios=8376/0, merge=0/0, ticks=2750/0, in_queue=2750, util=96.79% 00:13:37.801 03:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:37.801 03:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:38.061 03:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.061 03:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:38.626 03:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.626 03:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:38.884 03:02:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:38.884 03:02:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:39.452 03:02:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:39.452 03:02:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 71446 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.710 nvmf hotplug test: fio failed as expected 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:39.710 
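Stripped of the trace noise, the hotplug check above follows a simple pattern: start a 10-second fio read job in the background against the connected namespaces, delete the backing bdevs over RPC while it is running, and treat a non-zero fio exit status as the pass condition. A condensed sketch of that flow, reusing the fio-wrapper and rpc.py invocations seen in the log (the surrounding bookkeeping in target/fio.sh is more involved than this):

    # hedged sketch of the hotplug flow; paths and bdev names taken from the trace
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # background read job on /dev/nvme0n1..n4
    fio_pid=$!
    sleep 3                                                            # let fio start issuing I/O
    "$SPDK"/scripts/rpc.py bdev_raid_delete concat0                    # pull the RAID bdevs first
    "$SPDK"/scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$SPDK"/scripts/rpc.py bdev_malloc_delete "$m"                 # then the plain malloc bdevs
    done
    if wait "$fio_pid"; then
        echo 'unexpected: fio survived bdev removal'
    else
        echo 'nvmf hotplug test: fio failed as expected'               # err=121 (Remote I/O error) above is the pass case
    fi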
03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:39.710 03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:39.969 rmmod nvme_tcp 00:13:39.969 rmmod nvme_fabrics 00:13:39.969 rmmod nvme_keyring 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 71065 ']' 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 71065 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 71065 ']' 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 71065 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71065 00:13:39.969 killing process with pid 71065 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71065' 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 71065 00:13:39.969 03:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 71065 00:13:41.343 03:02:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:41.344 03:02:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:41.344 03:02:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:41.344 03:02:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:41.344 03:02:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:41.344 03:02:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.344 
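The teardown running here is the usual nvmftestfini sequence: drop the subsystem, remove the fio verify-state files, unload the initiator-side kernel modules, kill the nvmf_tgt application, and dismantle the network namespace. Roughly, with the helper internals omitted and the namespace removal assumed (it runs with xtrace disabled, so its exact commands do not appear in the log):

    # condensed teardown mirroring the trace above (sketch, not the full common.sh helpers)
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
    sync
    modprobe -v -r nvme-tcp                        # also drags out nvme_fabrics/nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid" || true     # nvmf_tgt, pid 71065 in this run; killprocess does more checking
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # what _remove_spdk_ns boils down to (assumed)
    ip -4 addr flush nvmf_init_if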
03:02:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.344 03:02:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.344 03:02:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:41.344 ************************************ 00:13:41.344 END TEST nvmf_fio_target 00:13:41.344 ************************************ 00:13:41.344 00:13:41.344 real 0m21.150s 00:13:41.344 user 1m18.168s 00:13:41.344 sys 0m10.683s 00:13:41.344 03:02:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:41.344 03:02:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.344 03:02:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:41.344 03:02:47 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:41.344 03:02:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:41.344 03:02:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.344 03:02:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.344 ************************************ 00:13:41.344 START TEST nvmf_bdevio 00:13:41.344 ************************************ 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:41.344 * Looking for test storage... 00:13:41.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:41.344 Cannot find device "nvmf_tgt_br" 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:41.344 Cannot find device "nvmf_tgt_br2" 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip 
link set nvmf_init_br down 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:41.344 Cannot find device "nvmf_tgt_br" 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:41.344 Cannot find device "nvmf_tgt_br2" 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:41.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:41.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:41.344 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:41.345 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:41.345 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:41.345 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:41.345 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:41.345 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:41.345 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:41.602 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 
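The interface names flying past in this stretch are the per-test virtual topology that nvmf_veth_init builds: one veth pair for the initiator (nvmf_init_if at 10.0.0.1, peer nvmf_init_br), two pairs whose far ends sit inside the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3), and a bridge nvmf_br joining the host-side peers. A condensed recreation of the same layout, minus the cleanup and error handling the helper carries:

    # veth/bridge topology used by the TCP tests (condensed from the trace)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$peer" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow simply confirm that 10.0.0.2, 10.0.0.3, and 10.0.0.1 all answer before nvmf_tgt is launched inside the namespace.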
00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:41.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:13:41.603 00:13:41.603 --- 10.0.0.2 ping statistics --- 00:13:41.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.603 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:41.603 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:41.603 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:13:41.603 00:13:41.603 --- 10.0.0.3 ping statistics --- 00:13:41.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.603 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:41.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:13:41.603 00:13:41.603 --- 10.0.0.1 ping statistics --- 00:13:41.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.603 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=71774 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 71774 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 71774 ']' 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:41.603 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:41.603 03:02:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:41.603 [2024-07-13 03:02:48.058472] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:41.603 [2024-07-13 03:02:48.058634] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.861 [2024-07-13 03:02:48.232676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.120 [2024-07-13 03:02:48.389039] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.120 [2024-07-13 03:02:48.389098] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.120 [2024-07-13 03:02:48.389129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.120 [2024-07-13 03:02:48.389142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.120 [2024-07-13 03:02:48.389153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.120 [2024-07-13 03:02:48.389377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:42.120 [2024-07-13 03:02:48.390017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:42.120 [2024-07-13 03:02:48.390166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.120 [2024-07-13 03:02:48.390175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:42.120 [2024-07-13 03:02:48.562168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:42.688 03:02:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.688 03:02:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:42.688 03:02:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.688 03:02:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:42.688 03:02:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:42.688 03:02:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.688 03:02:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.688 03:02:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.688 03:02:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:42.688 [2024-07-13 03:02:48.965479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.688 Malloc0 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:42.688 [2024-07-13 03:02:49.080929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:42.688 { 00:13:42.688 "params": { 00:13:42.688 "name": "Nvme$subsystem", 00:13:42.688 "trtype": "$TEST_TRANSPORT", 00:13:42.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.688 "adrfam": "ipv4", 00:13:42.688 "trsvcid": "$NVMF_PORT", 00:13:42.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.688 "hdgst": ${hdgst:-false}, 00:13:42.688 "ddgst": ${ddgst:-false} 00:13:42.688 }, 00:13:42.688 "method": "bdev_nvme_attach_controller" 00:13:42.688 } 00:13:42.688 EOF 00:13:42.688 )") 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
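Behind the xtrace, the target that bdevio exercises is assembled with five RPCs, and the JSON being generated here just points bdevio's bdev_nvme_attach_controller call at the resulting listener. The sequence, condensed from the trace (rpc.py stands for the full scripts/rpc.py invocation shown above):

    # target setup behind target/bdevio.sh@18-22 (arguments copied from the trace)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0                 # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio is then started with --json /dev/fd/62 carrying the JSON printed just below, so instead of opening a local PCIe device it attaches an NVMe-oF controller over TCP to 10.0.0.2:4420 and runs its block-level test suite against Nvme1n1.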
00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:42.688 03:02:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:42.688 "params": { 00:13:42.688 "name": "Nvme1", 00:13:42.688 "trtype": "tcp", 00:13:42.688 "traddr": "10.0.0.2", 00:13:42.688 "adrfam": "ipv4", 00:13:42.688 "trsvcid": "4420", 00:13:42.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.688 "hdgst": false, 00:13:42.688 "ddgst": false 00:13:42.688 }, 00:13:42.688 "method": "bdev_nvme_attach_controller" 00:13:42.688 }' 00:13:42.947 [2024-07-13 03:02:49.191999] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:42.947 [2024-07-13 03:02:49.192181] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71810 ] 00:13:42.947 [2024-07-13 03:02:49.366468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.204 [2024-07-13 03:02:49.598157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.204 [2024-07-13 03:02:49.598273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.204 [2024-07-13 03:02:49.598278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.462 [2024-07-13 03:02:49.785803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:43.719 I/O targets: 00:13:43.719 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:43.719 00:13:43.719 00:13:43.719 CUnit - A unit testing framework for C - Version 2.1-3 00:13:43.719 http://cunit.sourceforge.net/ 00:13:43.719 00:13:43.719 00:13:43.719 Suite: bdevio tests on: Nvme1n1 00:13:43.719 Test: blockdev write read block ...passed 00:13:43.719 Test: blockdev write zeroes read block ...passed 00:13:43.719 Test: blockdev write zeroes read no split ...passed 00:13:43.719 Test: blockdev write zeroes read split ...passed 00:13:43.719 Test: blockdev write zeroes read split partial ...passed 00:13:43.719 Test: blockdev reset ...[2024-07-13 03:02:50.050498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:43.719 [2024-07-13 03:02:50.050692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:13:43.719 passed 00:13:43.719 Test: blockdev write read 8 blocks ...[2024-07-13 03:02:50.062222] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:43.719 passed 00:13:43.719 Test: blockdev write read size > 128k ...passed 00:13:43.719 Test: blockdev write read invalid size ...passed 00:13:43.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.719 Test: blockdev write read max offset ...passed 00:13:43.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.719 Test: blockdev writev readv 8 blocks ...passed 00:13:43.719 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.719 Test: blockdev writev readv block ...passed 00:13:43.719 Test: blockdev writev readv size > 128k ...passed 00:13:43.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.719 Test: blockdev comparev and writev ...[2024-07-13 03:02:50.074135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.719 [2024-07-13 03:02:50.074220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:43.719 [2024-07-13 03:02:50.074268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.719 [2024-07-13 03:02:50.074289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:43.719 [2024-07-13 03:02:50.074656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.719 [2024-07-13 03:02:50.074689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:43.719 [2024-07-13 03:02:50.074714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.719 [2024-07-13 03:02:50.074734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:43.719 [2024-07-13 03:02:50.075135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.719 [2024-07-13 03:02:50.075176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:43.719 [2024-07-13 03:02:50.075204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.719 [2024-07-13 03:02:50.075227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:43.719 [2024-07-13 03:02:50.075667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.719 [2024-07-13 03:02:50.075728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:43.719 [2024-07-13 03:02:50.075756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:43.719 [2024-07-13 03:02:50.075775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:13:43.719 passed 00:13:43.719 Test: blockdev nvme passthru rw ...passed 00:13:43.719 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.719 Test: blockdev nvme admin passthru ...[2024-07-13 03:02:50.076846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:43.720 [2024-07-13 03:02:50.076908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:43.720 [2024-07-13 03:02:50.077057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:43.720 [2024-07-13 03:02:50.077087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:43.720 [2024-07-13 03:02:50.077236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:43.720 [2024-07-13 03:02:50.077264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:43.720 [2024-07-13 03:02:50.077436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:43.720 [2024-07-13 03:02:50.077465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:43.720 passed 00:13:43.720 Test: blockdev copy ...passed 00:13:43.720 00:13:43.720 Run Summary: Type Total Ran Passed Failed Inactive 00:13:43.720 suites 1 1 n/a 0 0 00:13:43.720 tests 23 23 23 0 0 00:13:43.720 asserts 152 152 152 0 n/a 00:13:43.720 00:13:43.720 Elapsed time = 0.302 seconds 00:13:44.654 03:02:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.654 03:02:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.654 03:02:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:44.654 03:02:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.654 03:02:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:44.654 03:02:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:44.654 03:02:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.654 03:02:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:44.654 03:02:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.654 03:02:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:44.654 03:02:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.654 03:02:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.654 rmmod nvme_tcp 00:13:44.654 rmmod nvme_fabrics 00:13:44.913 rmmod nvme_keyring 00:13:44.913 03:02:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.913 03:02:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:44.913 03:02:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:44.913 03:02:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 71774 ']' 00:13:44.913 03:02:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 71774 00:13:44.913 03:02:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 71774 ']' 00:13:44.913 03:02:51 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@952 -- # kill -0 71774 00:13:44.913 03:02:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:44.913 03:02:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.913 03:02:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71774 00:13:44.913 killing process with pid 71774 00:13:44.914 03:02:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:44.914 03:02:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:44.914 03:02:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71774' 00:13:44.914 03:02:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 71774 00:13:44.914 03:02:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 71774 00:13:46.293 03:02:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:46.293 03:02:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:46.293 03:02:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:46.293 03:02:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.293 03:02:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:46.293 03:02:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.293 03:02:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.293 03:02:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.293 03:02:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:46.293 00:13:46.293 real 0m4.901s 00:13:46.293 user 0m18.662s 00:13:46.293 sys 0m0.904s 00:13:46.293 03:02:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:46.293 03:02:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:46.293 ************************************ 00:13:46.293 END TEST nvmf_bdevio 00:13:46.293 ************************************ 00:13:46.293 03:02:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:46.293 03:02:52 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:46.293 03:02:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:46.293 03:02:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.293 03:02:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:46.293 ************************************ 00:13:46.293 START TEST nvmf_auth_target 00:13:46.293 ************************************ 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:46.293 * Looking for test storage... 
00:13:46.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:46.293 Cannot find device "nvmf_tgt_br" 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:46.293 Cannot find device "nvmf_tgt_br2" 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:46.293 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:46.293 Cannot find device "nvmf_tgt_br" 00:13:46.293 
03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:46.294 Cannot find device "nvmf_tgt_br2" 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:46.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:46.294 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:46.552 03:02:52 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:46.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:13:46.552 00:13:46.552 --- 10.0.0.2 ping statistics --- 00:13:46.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.552 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:46.552 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:46.552 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:13:46.552 00:13:46.552 --- 10.0.0.3 ping statistics --- 00:13:46.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.552 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:46.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:46.552 00:13:46.552 --- 10.0.0.1 ping statistics --- 00:13:46.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.552 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72038 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72038 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72038 ']' 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.552 03:02:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.552 03:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=72070 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a9a1d95114c616d1a030bb014cd9ad6b9de0108c98f22891 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.t9h 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a9a1d95114c616d1a030bb014cd9ad6b9de0108c98f22891 0 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a9a1d95114c616d1a030bb014cd9ad6b9de0108c98f22891 0 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a9a1d95114c616d1a030bb014cd9ad6b9de0108c98f22891 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:47.487 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:47.746 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.t9h 00:13:47.746 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.t9h 00:13:47.746 03:02:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.t9h 00:13:47.746 03:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:47.746 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:47.746 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:47.746 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:47.746 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:47.746 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:47.746 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:47.746 03:02:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7338d4f1165c1a10b86854f15d6f40b803bd5e8d2c756b82cb57721637ede593 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZOs 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7338d4f1165c1a10b86854f15d6f40b803bd5e8d2c756b82cb57721637ede593 3 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7338d4f1165c1a10b86854f15d6f40b803bd5e8d2c756b82cb57721637ede593 3 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7338d4f1165c1a10b86854f15d6f40b803bd5e8d2c756b82cb57721637ede593 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZOs 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZOs 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ZOs 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1bad155eaeebb3908c0fd6d3f7ca2d3d 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0Sl 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1bad155eaeebb3908c0fd6d3f7ca2d3d 1 00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1bad155eaeebb3908c0fd6d3f7ca2d3d 1 
00:13:47.746 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1bad155eaeebb3908c0fd6d3f7ca2d3d 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0Sl 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0Sl 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.0Sl 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8277c892e59093ba85ab52790b111612b46677cc46de0a46 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Weo 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8277c892e59093ba85ab52790b111612b46677cc46de0a46 2 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8277c892e59093ba85ab52790b111612b46677cc46de0a46 2 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8277c892e59093ba85ab52790b111612b46677cc46de0a46 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Weo 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Weo 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Weo 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:47.747 
03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c2481b653dc6b2ae3650ee25c83194a9fbb1211397f4b188 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.kIg 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c2481b653dc6b2ae3650ee25c83194a9fbb1211397f4b188 2 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c2481b653dc6b2ae3650ee25c83194a9fbb1211397f4b188 2 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c2481b653dc6b2ae3650ee25c83194a9fbb1211397f4b188 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:47.747 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.kIg 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.kIg 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.kIg 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0780d35e90e63e3e0ed1ab82b03cb382 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Irv 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0780d35e90e63e3e0ed1ab82b03cb382 1 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0780d35e90e63e3e0ed1ab82b03cb382 1 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0780d35e90e63e3e0ed1ab82b03cb382 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Irv 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Irv 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Irv 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e957fe2d56c52dddec14bcf2cfe56d09b3d61c45a36c1ffb5304c8785a58e5b8 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.EEX 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e957fe2d56c52dddec14bcf2cfe56d09b3d61c45a36c1ffb5304c8785a58e5b8 3 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e957fe2d56c52dddec14bcf2cfe56d09b3d61c45a36c1ffb5304c8785a58e5b8 3 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e957fe2d56c52dddec14bcf2cfe56d09b3d61c45a36c1ffb5304c8785a58e5b8 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.EEX 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.EEX 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.EEX 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 72038 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72038 ']' 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
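The gen_dhchap_key traces above show how each secret is produced: xxd pulls a random hex string of the requested length from /dev/urandom, and format_key wraps that ASCII string into the DHHC-1:<digest-id>:<base64>: form that the later nvme connect commands consume (digest ids 00/01/02/03 line up with null/sha256/sha384/sha512, matching the digests map in the trace). A minimal stand-alone sketch of that wrapping, assuming the trailing four bytes are a little-endian CRC-32 of the key, which is what the base64 blobs in this log suggest; this is not SPDK's exact helper:

# sketch: build a DHHC-1 secret shaped like the ones used by 'nvme connect' below
key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as in 'gen_dhchap_key null 48'
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the ASCII hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte CRC-32 trailer
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")   # 00 = null digest
PY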
00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.006 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.265 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.265 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:48.265 03:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 72070 /var/tmp/host.sock 00:13:48.265 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72070 ']' 00:13:48.265 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:48.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:48.265 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.266 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:48.266 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.266 03:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.t9h 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.t9h 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.t9h 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ZOs ]] 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZOs 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZOs 00:13:48.834 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.ZOs 00:13:49.093 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:49.093 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0Sl 00:13:49.093 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.093 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.093 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.093 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0Sl 00:13:49.093 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0Sl 00:13:49.352 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Weo ]] 00:13:49.352 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Weo 00:13:49.352 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.352 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.352 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.352 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Weo 00:13:49.352 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Weo 00:13:49.610 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:49.610 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kIg 00:13:49.610 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.610 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.610 03:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.610 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.kIg 00:13:49.610 03:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.kIg 00:13:49.869 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Irv ]] 00:13:49.869 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Irv 00:13:49.869 03:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.870 03:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.870 03:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.870 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Irv 00:13:49.870 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Irv 00:13:50.128 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:50.128 
03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EEX 00:13:50.128 03:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.128 03:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.128 03:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.128 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.EEX 00:13:50.128 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.EEX 00:13:50.387 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:50.387 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:50.387 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:50.387 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.387 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:50.387 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:50.646 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:50.646 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.647 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:50.647 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:50.647 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:50.647 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.647 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.647 03:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.647 03:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.647 03:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.647 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.647 03:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.905 00:13:50.905 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:50.905 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.905 03:02:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.164 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.164 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.164 03:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.164 03:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.164 03:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.164 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.164 { 00:13:51.164 "cntlid": 1, 00:13:51.164 "qid": 0, 00:13:51.164 "state": "enabled", 00:13:51.164 "thread": "nvmf_tgt_poll_group_000", 00:13:51.164 "listen_address": { 00:13:51.164 "trtype": "TCP", 00:13:51.164 "adrfam": "IPv4", 00:13:51.164 "traddr": "10.0.0.2", 00:13:51.164 "trsvcid": "4420" 00:13:51.164 }, 00:13:51.164 "peer_address": { 00:13:51.164 "trtype": "TCP", 00:13:51.164 "adrfam": "IPv4", 00:13:51.164 "traddr": "10.0.0.1", 00:13:51.164 "trsvcid": "60092" 00:13:51.164 }, 00:13:51.165 "auth": { 00:13:51.165 "state": "completed", 00:13:51.165 "digest": "sha256", 00:13:51.165 "dhgroup": "null" 00:13:51.165 } 00:13:51.165 } 00:13:51.165 ]' 00:13:51.165 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.165 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.165 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.165 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:51.165 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.165 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.165 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.165 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.423 03:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:13:55.607 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.607 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:13:55.607 03:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.607 03:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.607 03:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.607 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:13:55.607 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:55.607 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.174 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.174 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.739 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.739 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.739 03:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.739 03:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.739 03:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.739 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:56.739 { 00:13:56.739 "cntlid": 3, 00:13:56.739 "qid": 0, 00:13:56.739 "state": "enabled", 00:13:56.739 "thread": "nvmf_tgt_poll_group_000", 00:13:56.739 "listen_address": { 00:13:56.739 "trtype": "TCP", 00:13:56.739 "adrfam": "IPv4", 00:13:56.739 "traddr": "10.0.0.2", 00:13:56.739 "trsvcid": "4420" 00:13:56.739 }, 00:13:56.739 "peer_address": { 00:13:56.739 "trtype": "TCP", 00:13:56.739 
"adrfam": "IPv4", 00:13:56.739 "traddr": "10.0.0.1", 00:13:56.739 "trsvcid": "60110" 00:13:56.739 }, 00:13:56.739 "auth": { 00:13:56.739 "state": "completed", 00:13:56.739 "digest": "sha256", 00:13:56.739 "dhgroup": "null" 00:13:56.739 } 00:13:56.739 } 00:13:56.739 ]' 00:13:56.739 03:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.739 03:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.739 03:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.739 03:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:56.739 03:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.739 03:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.739 03:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.739 03:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.996 03:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.930 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.188 00:13:58.188 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:58.188 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.188 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:58.446 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.446 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.446 03:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.447 03:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.447 03:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.447 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:58.447 { 00:13:58.447 "cntlid": 5, 00:13:58.447 "qid": 0, 00:13:58.447 "state": "enabled", 00:13:58.447 "thread": "nvmf_tgt_poll_group_000", 00:13:58.447 "listen_address": { 00:13:58.447 "trtype": "TCP", 00:13:58.447 "adrfam": "IPv4", 00:13:58.447 "traddr": "10.0.0.2", 00:13:58.447 "trsvcid": "4420" 00:13:58.447 }, 00:13:58.447 "peer_address": { 00:13:58.447 "trtype": "TCP", 00:13:58.447 "adrfam": "IPv4", 00:13:58.447 "traddr": "10.0.0.1", 00:13:58.447 "trsvcid": "60128" 00:13:58.447 }, 00:13:58.447 "auth": { 00:13:58.447 "state": "completed", 00:13:58.447 "digest": "sha256", 00:13:58.447 "dhgroup": "null" 00:13:58.447 } 00:13:58.447 } 00:13:58.447 ]' 00:13:58.447 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:58.447 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:58.447 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:58.705 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:58.705 03:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:58.705 03:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.705 03:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.705 03:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.963 03:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:13:59.530 03:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.530 03:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:13:59.530 03:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.530 03:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.530 03:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.530 03:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.530 03:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:59.530 03:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:59.790 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:59.790 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.790 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:59.790 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:59.790 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:59.790 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.790 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:13:59.790 03:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.790 03:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.790 03:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.790 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:59.790 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:00.049 00:14:00.049 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:00.049 03:03:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.049 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:00.306 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.306 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.307 03:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.307 03:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.307 03:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.307 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.307 { 00:14:00.307 "cntlid": 7, 00:14:00.307 "qid": 0, 00:14:00.307 "state": "enabled", 00:14:00.307 "thread": "nvmf_tgt_poll_group_000", 00:14:00.307 "listen_address": { 00:14:00.307 "trtype": "TCP", 00:14:00.307 "adrfam": "IPv4", 00:14:00.307 "traddr": "10.0.0.2", 00:14:00.307 "trsvcid": "4420" 00:14:00.307 }, 00:14:00.307 "peer_address": { 00:14:00.307 "trtype": "TCP", 00:14:00.307 "adrfam": "IPv4", 00:14:00.307 "traddr": "10.0.0.1", 00:14:00.307 "trsvcid": "49504" 00:14:00.307 }, 00:14:00.307 "auth": { 00:14:00.307 "state": "completed", 00:14:00.307 "digest": "sha256", 00:14:00.307 "dhgroup": "null" 00:14:00.307 } 00:14:00.307 } 00:14:00.307 ]' 00:14:00.307 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.307 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.307 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.307 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:00.307 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.565 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.566 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.566 03:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.825 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:14:01.393 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.393 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:01.393 03:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.393 03:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.393 03:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.393 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:14:01.393 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.393 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:01.393 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:01.653 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:01.653 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.653 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:01.653 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:01.653 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:01.653 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.653 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.653 03:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.653 03:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.653 03:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.653 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.653 03:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.913 00:14:01.913 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.913 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.913 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.173 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.173 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.173 03:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.173 03:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.173 03:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.173 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.173 { 00:14:02.173 "cntlid": 9, 00:14:02.173 "qid": 0, 00:14:02.173 "state": "enabled", 00:14:02.173 "thread": "nvmf_tgt_poll_group_000", 00:14:02.173 "listen_address": { 00:14:02.173 "trtype": "TCP", 00:14:02.173 "adrfam": "IPv4", 00:14:02.173 
"traddr": "10.0.0.2", 00:14:02.173 "trsvcid": "4420" 00:14:02.173 }, 00:14:02.173 "peer_address": { 00:14:02.173 "trtype": "TCP", 00:14:02.173 "adrfam": "IPv4", 00:14:02.173 "traddr": "10.0.0.1", 00:14:02.173 "trsvcid": "49536" 00:14:02.173 }, 00:14:02.173 "auth": { 00:14:02.173 "state": "completed", 00:14:02.173 "digest": "sha256", 00:14:02.173 "dhgroup": "ffdhe2048" 00:14:02.173 } 00:14:02.173 } 00:14:02.173 ]' 00:14:02.173 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.173 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.173 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.433 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:02.433 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.433 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.433 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.433 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.693 03:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:14:03.261 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.261 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:03.261 03:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.261 03:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.261 03:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.261 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.261 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:03.261 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:03.520 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:03.520 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.520 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:03.520 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:03.520 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:03.520 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.520 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.520 03:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.520 03:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.520 03:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.520 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.520 03:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.779 00:14:03.779 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.780 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.780 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.039 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.039 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.039 03:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.039 03:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.039 03:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.039 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.039 { 00:14:04.039 "cntlid": 11, 00:14:04.039 "qid": 0, 00:14:04.039 "state": "enabled", 00:14:04.039 "thread": "nvmf_tgt_poll_group_000", 00:14:04.039 "listen_address": { 00:14:04.039 "trtype": "TCP", 00:14:04.039 "adrfam": "IPv4", 00:14:04.039 "traddr": "10.0.0.2", 00:14:04.039 "trsvcid": "4420" 00:14:04.039 }, 00:14:04.039 "peer_address": { 00:14:04.039 "trtype": "TCP", 00:14:04.039 "adrfam": "IPv4", 00:14:04.039 "traddr": "10.0.0.1", 00:14:04.039 "trsvcid": "49564" 00:14:04.039 }, 00:14:04.039 "auth": { 00:14:04.039 "state": "completed", 00:14:04.039 "digest": "sha256", 00:14:04.039 "dhgroup": "ffdhe2048" 00:14:04.039 } 00:14:04.039 } 00:14:04.039 ]' 00:14:04.039 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.039 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.039 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.039 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:04.039 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.306 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.306 03:03:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.306 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.306 03:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:14:04.913 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.913 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:04.913 03:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.913 03:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.913 03:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.913 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.913 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:04.913 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:05.172 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:05.172 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.172 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:05.172 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:05.172 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:05.172 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.172 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.172 03:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.172 03:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.172 03:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.172 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.172 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.741 00:14:05.741 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.741 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.741 03:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.741 03:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.741 03:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.741 03:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.741 03:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.741 03:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.741 03:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.741 { 00:14:05.741 "cntlid": 13, 00:14:05.741 "qid": 0, 00:14:05.741 "state": "enabled", 00:14:05.741 "thread": "nvmf_tgt_poll_group_000", 00:14:05.741 "listen_address": { 00:14:05.741 "trtype": "TCP", 00:14:05.741 "adrfam": "IPv4", 00:14:05.741 "traddr": "10.0.0.2", 00:14:05.741 "trsvcid": "4420" 00:14:05.741 }, 00:14:05.741 "peer_address": { 00:14:05.741 "trtype": "TCP", 00:14:05.741 "adrfam": "IPv4", 00:14:05.741 "traddr": "10.0.0.1", 00:14:05.741 "trsvcid": "49590" 00:14:05.741 }, 00:14:05.741 "auth": { 00:14:05.741 "state": "completed", 00:14:05.741 "digest": "sha256", 00:14:05.741 "dhgroup": "ffdhe2048" 00:14:05.741 } 00:14:05.741 } 00:14:05.741 ]' 00:14:05.741 03:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.000 03:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:06.000 03:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.000 03:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:06.000 03:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.000 03:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.000 03:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.000 03:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.259 03:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:14:06.824 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.824 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 
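The iterations logged above all follow the same per-key sequence from target/auth.sh: restrict the host's DH-HMAC-CHAP digests/dhgroups, register the host NQN on the subsystem with the key pair under test, attach a TCP controller (which forces the authentication handshake), inspect the negotiated parameters on the resulting qpair, and detach again. A condensed sketch of one such round (the sha256/ffdhe2048/key2 case just completed), assuming the same rpc.py paths, NQNs and addresses as in this run and that the target app answers on its default RPC socket:

# Host-side bdev_nvme options: allow only the digest/dhgroup combination under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side: allow the host NQN with the matching key pair
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller over TCP, triggering DH-HMAC-CHAP authentication
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Inspect what was negotiated on the qpair, then detach before the next key
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0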
00:14:06.824 03:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.824 03:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.824 03:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.824 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.824 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:06.824 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.082 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:07.082 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:07.082 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:07.082 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:07.082 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:07.082 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.082 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:14:07.082 03:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.082 03:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.082 03:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.082 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:07.082 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:07.648 00:14:07.648 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.648 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.648 03:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.907 { 00:14:07.907 "cntlid": 15, 00:14:07.907 "qid": 0, 
00:14:07.907 "state": "enabled", 00:14:07.907 "thread": "nvmf_tgt_poll_group_000", 00:14:07.907 "listen_address": { 00:14:07.907 "trtype": "TCP", 00:14:07.907 "adrfam": "IPv4", 00:14:07.907 "traddr": "10.0.0.2", 00:14:07.907 "trsvcid": "4420" 00:14:07.907 }, 00:14:07.907 "peer_address": { 00:14:07.907 "trtype": "TCP", 00:14:07.907 "adrfam": "IPv4", 00:14:07.907 "traddr": "10.0.0.1", 00:14:07.907 "trsvcid": "49614" 00:14:07.907 }, 00:14:07.907 "auth": { 00:14:07.907 "state": "completed", 00:14:07.907 "digest": "sha256", 00:14:07.907 "dhgroup": "ffdhe2048" 00:14:07.907 } 00:14:07.907 } 00:14:07.907 ]' 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.907 03:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.166 03:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:14:08.734 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.734 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:08.734 03:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.734 03:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.734 03:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.734 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:08.734 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.734 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:08.734 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:09.301 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:09.301 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.301 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:09.301 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:14:09.301 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:09.301 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.301 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.301 03:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.301 03:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.301 03:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.301 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.301 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.559 00:14:09.560 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.560 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.560 03:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.818 03:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.818 03:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.819 03:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.819 03:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.819 03:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.819 03:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.819 { 00:14:09.819 "cntlid": 17, 00:14:09.819 "qid": 0, 00:14:09.819 "state": "enabled", 00:14:09.819 "thread": "nvmf_tgt_poll_group_000", 00:14:09.819 "listen_address": { 00:14:09.819 "trtype": "TCP", 00:14:09.819 "adrfam": "IPv4", 00:14:09.819 "traddr": "10.0.0.2", 00:14:09.819 "trsvcid": "4420" 00:14:09.819 }, 00:14:09.819 "peer_address": { 00:14:09.819 "trtype": "TCP", 00:14:09.819 "adrfam": "IPv4", 00:14:09.819 "traddr": "10.0.0.1", 00:14:09.819 "trsvcid": "55552" 00:14:09.819 }, 00:14:09.819 "auth": { 00:14:09.819 "state": "completed", 00:14:09.819 "digest": "sha256", 00:14:09.819 "dhgroup": "ffdhe3072" 00:14:09.819 } 00:14:09.819 } 00:14:09.819 ]' 00:14:09.819 03:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.819 03:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.819 03:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.819 03:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:09.819 03:03:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.819 03:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.819 03:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.819 03:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.078 03:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.034 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.034 
03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.601 00:14:11.601 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.601 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.601 03:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.601 03:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.601 03:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.601 03:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.601 03:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.861 03:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.861 03:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.861 { 00:14:11.861 "cntlid": 19, 00:14:11.861 "qid": 0, 00:14:11.861 "state": "enabled", 00:14:11.861 "thread": "nvmf_tgt_poll_group_000", 00:14:11.861 "listen_address": { 00:14:11.861 "trtype": "TCP", 00:14:11.861 "adrfam": "IPv4", 00:14:11.861 "traddr": "10.0.0.2", 00:14:11.861 "trsvcid": "4420" 00:14:11.861 }, 00:14:11.861 "peer_address": { 00:14:11.861 "trtype": "TCP", 00:14:11.861 "adrfam": "IPv4", 00:14:11.861 "traddr": "10.0.0.1", 00:14:11.861 "trsvcid": "55576" 00:14:11.861 }, 00:14:11.861 "auth": { 00:14:11.861 "state": "completed", 00:14:11.861 "digest": "sha256", 00:14:11.861 "dhgroup": "ffdhe3072" 00:14:11.861 } 00:14:11.861 } 00:14:11.861 ]' 00:14:11.861 03:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.861 03:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.861 03:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.861 03:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:11.861 03:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.861 03:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.861 03:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.861 03:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.120 03:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
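Between the SPDK host-side attaches, each round also exercises the kernel initiator with the same credentials: nvme-cli connects with the DH-HMAC-CHAP secret pair for the active key, disconnects, and the host entry is removed so the next key can be registered. A minimal sketch of that leg, assuming an nvme-cli build with DH-HMAC-CHAP support; HOST_SECRET and CTRL_SECRET stand in for the literal DHHC-1:... values shown in the log:

# Kernel-initiator connect using the secrets for the active key pair.
# HOST_SECRET / CTRL_SECRET are placeholders for the DHHC-1:... strings above.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 \
    --hostid f622eda1-fcfe-4e16-bc81-0757da055208 \
    --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Remove the host entry so the next key/dhgroup combination starts clean
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208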
00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.055 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.313 00:14:13.313 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.313 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.313 03:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.572 03:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.572 03:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.572 03:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.572 03:03:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.832 03:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.832 03:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.832 { 00:14:13.832 "cntlid": 21, 00:14:13.832 "qid": 0, 00:14:13.832 "state": "enabled", 00:14:13.832 "thread": "nvmf_tgt_poll_group_000", 00:14:13.832 "listen_address": { 00:14:13.832 "trtype": "TCP", 00:14:13.832 "adrfam": "IPv4", 00:14:13.832 "traddr": "10.0.0.2", 00:14:13.832 "trsvcid": "4420" 00:14:13.832 }, 00:14:13.832 "peer_address": { 00:14:13.832 "trtype": "TCP", 00:14:13.832 "adrfam": "IPv4", 00:14:13.832 "traddr": "10.0.0.1", 00:14:13.832 "trsvcid": "55588" 00:14:13.832 }, 00:14:13.832 "auth": { 00:14:13.832 "state": "completed", 00:14:13.832 "digest": "sha256", 00:14:13.832 "dhgroup": "ffdhe3072" 00:14:13.832 } 00:14:13.832 } 00:14:13.832 ]' 00:14:13.832 03:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.832 03:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.832 03:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.832 03:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:13.832 03:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.832 03:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.832 03:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.832 03:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.091 03:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:14:14.658 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.658 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:14.658 03:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.658 03:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.658 03:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.658 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.658 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:14.658 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:14.918 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:14.918 03:03:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.918 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:14.918 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:14.918 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:14.918 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.918 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:14:14.918 03:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.918 03:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.918 03:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.918 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:14.918 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:15.176 00:14:15.176 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.176 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.176 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.435 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.435 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.435 03:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.435 03:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.435 03:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.435 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.435 { 00:14:15.435 "cntlid": 23, 00:14:15.435 "qid": 0, 00:14:15.435 "state": "enabled", 00:14:15.435 "thread": "nvmf_tgt_poll_group_000", 00:14:15.435 "listen_address": { 00:14:15.435 "trtype": "TCP", 00:14:15.435 "adrfam": "IPv4", 00:14:15.435 "traddr": "10.0.0.2", 00:14:15.435 "trsvcid": "4420" 00:14:15.435 }, 00:14:15.435 "peer_address": { 00:14:15.435 "trtype": "TCP", 00:14:15.435 "adrfam": "IPv4", 00:14:15.435 "traddr": "10.0.0.1", 00:14:15.435 "trsvcid": "55616" 00:14:15.435 }, 00:14:15.435 "auth": { 00:14:15.435 "state": "completed", 00:14:15.435 "digest": "sha256", 00:14:15.435 "dhgroup": "ffdhe3072" 00:14:15.435 } 00:14:15.435 } 00:14:15.435 ]' 00:14:15.435 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.435 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.435 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:14:15.693 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:15.693 03:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.693 03:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.693 03:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.693 03:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.952 03:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:14:16.520 03:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.520 03:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:16.520 03:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.520 03:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.520 03:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.520 03:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.520 03:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.520 03:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:16.520 03:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:16.781 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:16.781 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.781 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:16.781 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:16.781 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:16.781 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.781 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.781 03:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.781 03:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.781 03:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.781 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.781 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.040 00:14:17.040 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.040 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.040 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.299 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.299 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.299 03:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.299 03:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.299 03:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.299 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.299 { 00:14:17.299 "cntlid": 25, 00:14:17.299 "qid": 0, 00:14:17.299 "state": "enabled", 00:14:17.299 "thread": "nvmf_tgt_poll_group_000", 00:14:17.299 "listen_address": { 00:14:17.299 "trtype": "TCP", 00:14:17.299 "adrfam": "IPv4", 00:14:17.299 "traddr": "10.0.0.2", 00:14:17.299 "trsvcid": "4420" 00:14:17.299 }, 00:14:17.299 "peer_address": { 00:14:17.299 "trtype": "TCP", 00:14:17.299 "adrfam": "IPv4", 00:14:17.299 "traddr": "10.0.0.1", 00:14:17.299 "trsvcid": "55636" 00:14:17.299 }, 00:14:17.299 "auth": { 00:14:17.299 "state": "completed", 00:14:17.299 "digest": "sha256", 00:14:17.299 "dhgroup": "ffdhe4096" 00:14:17.299 } 00:14:17.299 } 00:14:17.299 ]' 00:14:17.299 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.299 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.299 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.558 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:17.558 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.558 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.558 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.558 03:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.817 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret 
DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:14:18.383 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.383 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:18.383 03:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.383 03:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.383 03:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.383 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.383 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:18.383 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:18.643 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:18.643 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.643 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:18.643 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:18.643 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:18.643 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.643 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.643 03:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.643 03:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.643 03:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.643 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.643 03:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.902 00:14:18.902 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.902 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.902 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
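The entries above trace one full connect_authenticate round for sha256/ffdhe4096: the SPDK initiator's bdev_nvme options are pinned to a single digest and DH group, the host NQN is added to the subsystem with a DH-HMAC-CHAP key pair, a controller is attached over TCP, and the target qpair is expected to report the negotiated parameters before the controller is detached. A condensed sketch of that round follows, using the paths, NQNs and key names (key0/ckey0) shown in the trace; the rpc_cmd stand-in and the exact check layout are illustrative, not the script's literal code.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208
  # stand-in for the test's target-side RPC helper (talks to the default SPDK socket)
  rpc_cmd() { "$rpc" "$@"; }

  # pin the SPDK initiator to one digest/dhgroup combination
  "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # register the host on the subsystem with a DH-HMAC-CHAP key pair (target side)
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach a controller over TCP with the same keys
  # (-s before the subcommand is the RPC socket, -s 4420 after it is the NVMe/TCP service id)
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # the controller must exist and the target qpair must report the negotiated auth parameters
  [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # tear the controller down before the kernel-initiator leg of the check
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
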
00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.161 { 00:14:19.161 "cntlid": 27, 00:14:19.161 "qid": 0, 00:14:19.161 "state": "enabled", 00:14:19.161 "thread": "nvmf_tgt_poll_group_000", 00:14:19.161 "listen_address": { 00:14:19.161 "trtype": "TCP", 00:14:19.161 "adrfam": "IPv4", 00:14:19.161 "traddr": "10.0.0.2", 00:14:19.161 "trsvcid": "4420" 00:14:19.161 }, 00:14:19.161 "peer_address": { 00:14:19.161 "trtype": "TCP", 00:14:19.161 "adrfam": "IPv4", 00:14:19.161 "traddr": "10.0.0.1", 00:14:19.161 "trsvcid": "55660" 00:14:19.161 }, 00:14:19.161 "auth": { 00:14:19.161 "state": "completed", 00:14:19.161 "digest": "sha256", 00:14:19.161 "dhgroup": "ffdhe4096" 00:14:19.161 } 00:14:19.161 } 00:14:19.161 ]' 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.161 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.420 03:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:14:19.987 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.987 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:19.987 03:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.987 03:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.987 03:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.987 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.987 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:19.987 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:20.246 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:20.247 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.247 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:20.247 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:20.247 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:20.247 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.247 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.247 03:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.247 03:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.247 03:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.247 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.247 03:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.814 00:14:20.814 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.814 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.814 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.073 { 00:14:21.073 "cntlid": 29, 00:14:21.073 "qid": 0, 00:14:21.073 "state": "enabled", 00:14:21.073 "thread": "nvmf_tgt_poll_group_000", 00:14:21.073 "listen_address": { 00:14:21.073 "trtype": "TCP", 00:14:21.073 "adrfam": "IPv4", 00:14:21.073 "traddr": "10.0.0.2", 00:14:21.073 "trsvcid": "4420" 00:14:21.073 }, 00:14:21.073 "peer_address": { 00:14:21.073 "trtype": "TCP", 00:14:21.073 "adrfam": "IPv4", 00:14:21.073 "traddr": "10.0.0.1", 00:14:21.073 "trsvcid": "48670" 00:14:21.073 }, 00:14:21.073 "auth": { 00:14:21.073 "state": "completed", 00:14:21.073 "digest": "sha256", 00:14:21.073 "dhgroup": 
"ffdhe4096" 00:14:21.073 } 00:14:21.073 } 00:14:21.073 ]' 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.073 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.333 03:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:14:21.900 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.900 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:21.901 03:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.901 03:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.901 03:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.901 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.901 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:21.901 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:22.159 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:22.159 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.159 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:22.159 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:22.159 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:22.159 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.159 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:14:22.159 03:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.159 03:03:28 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:22.160 03:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.160 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:22.160 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:22.726 00:14:22.726 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.726 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.727 03:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.727 03:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.727 03:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.727 03:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.727 03:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.727 03:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.727 03:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.727 { 00:14:22.727 "cntlid": 31, 00:14:22.727 "qid": 0, 00:14:22.727 "state": "enabled", 00:14:22.727 "thread": "nvmf_tgt_poll_group_000", 00:14:22.727 "listen_address": { 00:14:22.727 "trtype": "TCP", 00:14:22.727 "adrfam": "IPv4", 00:14:22.727 "traddr": "10.0.0.2", 00:14:22.727 "trsvcid": "4420" 00:14:22.727 }, 00:14:22.727 "peer_address": { 00:14:22.727 "trtype": "TCP", 00:14:22.727 "adrfam": "IPv4", 00:14:22.727 "traddr": "10.0.0.1", 00:14:22.727 "trsvcid": "48706" 00:14:22.727 }, 00:14:22.727 "auth": { 00:14:22.727 "state": "completed", 00:14:22.727 "digest": "sha256", 00:14:22.727 "dhgroup": "ffdhe4096" 00:14:22.727 } 00:14:22.727 } 00:14:22.727 ]' 00:14:22.727 03:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.986 03:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.986 03:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.986 03:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:22.986 03:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.986 03:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.986 03:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.986 03:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.245 03:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid 
f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:14:23.812 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.812 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:23.812 03:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.812 03:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.812 03:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.812 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.812 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.812 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:23.812 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:24.071 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:24.071 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.071 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:24.071 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:24.071 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:24.071 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.071 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.071 03:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.071 03:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.071 03:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.071 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.071 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.639 00:14:24.639 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.639 03:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.639 03:03:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.957 03:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.957 03:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.957 03:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.957 03:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.958 03:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.958 03:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.958 { 00:14:24.958 "cntlid": 33, 00:14:24.958 "qid": 0, 00:14:24.958 "state": "enabled", 00:14:24.958 "thread": "nvmf_tgt_poll_group_000", 00:14:24.958 "listen_address": { 00:14:24.958 "trtype": "TCP", 00:14:24.958 "adrfam": "IPv4", 00:14:24.958 "traddr": "10.0.0.2", 00:14:24.958 "trsvcid": "4420" 00:14:24.958 }, 00:14:24.958 "peer_address": { 00:14:24.958 "trtype": "TCP", 00:14:24.958 "adrfam": "IPv4", 00:14:24.958 "traddr": "10.0.0.1", 00:14:24.958 "trsvcid": "48728" 00:14:24.958 }, 00:14:24.958 "auth": { 00:14:24.958 "state": "completed", 00:14:24.958 "digest": "sha256", 00:14:24.958 "dhgroup": "ffdhe6144" 00:14:24.958 } 00:14:24.958 } 00:14:24.958 ]' 00:14:24.958 03:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.958 03:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.958 03:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.958 03:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:24.958 03:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.958 03:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.958 03:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.958 03:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.249 03:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:14:25.817 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.817 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:25.817 03:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.817 03:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.817 03:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.817 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.817 
03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:25.817 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:26.076 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:26.076 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.076 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.076 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:26.076 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:26.076 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.076 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.076 03:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.076 03:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.076 03:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.076 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.076 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.642 00:14:26.642 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.642 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.642 03:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.642 03:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.642 03:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.642 03:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.642 03:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.642 03:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.643 03:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.643 { 00:14:26.643 "cntlid": 35, 00:14:26.643 "qid": 0, 00:14:26.643 "state": "enabled", 00:14:26.643 "thread": "nvmf_tgt_poll_group_000", 00:14:26.643 "listen_address": { 00:14:26.643 "trtype": "TCP", 00:14:26.643 "adrfam": "IPv4", 00:14:26.643 "traddr": "10.0.0.2", 00:14:26.643 "trsvcid": "4420" 00:14:26.643 }, 00:14:26.643 "peer_address": { 00:14:26.643 "trtype": "TCP", 00:14:26.643 
"adrfam": "IPv4", 00:14:26.643 "traddr": "10.0.0.1", 00:14:26.643 "trsvcid": "48760" 00:14:26.643 }, 00:14:26.643 "auth": { 00:14:26.643 "state": "completed", 00:14:26.643 "digest": "sha256", 00:14:26.643 "dhgroup": "ffdhe6144" 00:14:26.643 } 00:14:26.643 } 00:14:26.643 ]' 00:14:26.643 03:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.901 03:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.901 03:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.901 03:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:26.901 03:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.901 03:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.901 03:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.901 03:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.159 03:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:14:27.727 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.727 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:27.727 03:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.727 03:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.727 03:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.727 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.727 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:27.727 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:27.986 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:27.986 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.986 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:27.986 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:27.986 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:27.986 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.986 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.986 03:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.986 03:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.986 03:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.986 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.986 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.553 00:14:28.553 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.553 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.553 03:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.812 03:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.812 03:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.812 03:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.812 03:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.812 03:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.812 03:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.812 { 00:14:28.812 "cntlid": 37, 00:14:28.813 "qid": 0, 00:14:28.813 "state": "enabled", 00:14:28.813 "thread": "nvmf_tgt_poll_group_000", 00:14:28.813 "listen_address": { 00:14:28.813 "trtype": "TCP", 00:14:28.813 "adrfam": "IPv4", 00:14:28.813 "traddr": "10.0.0.2", 00:14:28.813 "trsvcid": "4420" 00:14:28.813 }, 00:14:28.813 "peer_address": { 00:14:28.813 "trtype": "TCP", 00:14:28.813 "adrfam": "IPv4", 00:14:28.813 "traddr": "10.0.0.1", 00:14:28.813 "trsvcid": "48808" 00:14:28.813 }, 00:14:28.813 "auth": { 00:14:28.813 "state": "completed", 00:14:28.813 "digest": "sha256", 00:14:28.813 "dhgroup": "ffdhe6144" 00:14:28.813 } 00:14:28.813 } 00:14:28.813 ]' 00:14:28.813 03:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.813 03:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.813 03:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.813 03:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:28.813 03:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.813 03:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.813 03:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.813 03:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.071 03:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:14:29.639 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.639 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:29.639 03:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.639 03:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.639 03:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.639 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.639 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:29.639 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:29.898 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:29.898 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.898 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:29.898 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:29.898 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:29.898 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.898 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:14:29.898 03:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.898 03:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.898 03:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.898 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.898 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.466 00:14:30.466 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
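After each SPDK-initiator round the trace exercises the same key from the kernel initiator: nvme-cli connects with the DH-HMAC-CHAP secrets passed directly, disconnects, and the host is removed from the subsystem before the next key is tried. A minimal sketch of that leg follows, with placeholder DHHC-1 strings standing in for the base64 key material shown above and rpc_cmd as in the earlier sketch.

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=f622eda1-fcfe-4e16-bc81-0757da055208
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid

  # connect with the host secret (and, for keys that have one, the controller secret)
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'
  # expect "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
  nvme disconnect -n "$subnqn"
  # drop the host again so the next key starts from a clean subsystem
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
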
00:14:30.466 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.466 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.729 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.729 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.729 03:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.729 03:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.729 03:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.729 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.729 { 00:14:30.729 "cntlid": 39, 00:14:30.729 "qid": 0, 00:14:30.729 "state": "enabled", 00:14:30.729 "thread": "nvmf_tgt_poll_group_000", 00:14:30.729 "listen_address": { 00:14:30.729 "trtype": "TCP", 00:14:30.729 "adrfam": "IPv4", 00:14:30.729 "traddr": "10.0.0.2", 00:14:30.729 "trsvcid": "4420" 00:14:30.729 }, 00:14:30.729 "peer_address": { 00:14:30.729 "trtype": "TCP", 00:14:30.729 "adrfam": "IPv4", 00:14:30.729 "traddr": "10.0.0.1", 00:14:30.729 "trsvcid": "32844" 00:14:30.729 }, 00:14:30.729 "auth": { 00:14:30.729 "state": "completed", 00:14:30.729 "digest": "sha256", 00:14:30.729 "dhgroup": "ffdhe6144" 00:14:30.729 } 00:14:30.729 } 00:14:30.729 ]' 00:14:30.729 03:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.729 03:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.729 03:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.729 03:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:30.729 03:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.729 03:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.729 03:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.729 03:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.986 03:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.921 03:03:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.921 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.487 00:14:32.487 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.487 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.487 03:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.746 03:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.746 03:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.746 03:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.746 03:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.746 03:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.746 03:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.746 { 00:14:32.746 "cntlid": 41, 00:14:32.746 "qid": 0, 00:14:32.746 "state": "enabled", 00:14:32.746 "thread": "nvmf_tgt_poll_group_000", 00:14:32.746 "listen_address": { 00:14:32.746 "trtype": 
"TCP", 00:14:32.746 "adrfam": "IPv4", 00:14:32.746 "traddr": "10.0.0.2", 00:14:32.746 "trsvcid": "4420" 00:14:32.746 }, 00:14:32.746 "peer_address": { 00:14:32.746 "trtype": "TCP", 00:14:32.746 "adrfam": "IPv4", 00:14:32.746 "traddr": "10.0.0.1", 00:14:32.746 "trsvcid": "32878" 00:14:32.746 }, 00:14:32.746 "auth": { 00:14:32.746 "state": "completed", 00:14:32.746 "digest": "sha256", 00:14:32.746 "dhgroup": "ffdhe8192" 00:14:32.746 } 00:14:32.746 } 00:14:32.746 ]' 00:14:32.746 03:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.004 03:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.004 03:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.005 03:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:33.005 03:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.005 03:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.005 03:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.005 03:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.263 03:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:14:33.832 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.832 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:33.832 03:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.832 03:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.832 03:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.832 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.832 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:33.832 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:34.091 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:34.091 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.091 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.091 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:34.091 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:34.092 03:03:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.092 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.092 03:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.092 03:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.092 03:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.092 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.092 03:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.659 00:14:34.659 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.659 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.659 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.917 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.917 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.917 03:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.917 03:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.917 03:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.917 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.917 { 00:14:34.917 "cntlid": 43, 00:14:34.917 "qid": 0, 00:14:34.917 "state": "enabled", 00:14:34.917 "thread": "nvmf_tgt_poll_group_000", 00:14:34.917 "listen_address": { 00:14:34.917 "trtype": "TCP", 00:14:34.917 "adrfam": "IPv4", 00:14:34.917 "traddr": "10.0.0.2", 00:14:34.917 "trsvcid": "4420" 00:14:34.917 }, 00:14:34.917 "peer_address": { 00:14:34.917 "trtype": "TCP", 00:14:34.917 "adrfam": "IPv4", 00:14:34.917 "traddr": "10.0.0.1", 00:14:34.917 "trsvcid": "32908" 00:14:34.917 }, 00:14:34.917 "auth": { 00:14:34.917 "state": "completed", 00:14:34.917 "digest": "sha256", 00:14:34.917 "dhgroup": "ffdhe8192" 00:14:34.917 } 00:14:34.917 } 00:14:34.917 ]' 00:14:34.917 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.917 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.917 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.175 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:35.175 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.175 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:14:35.175 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.175 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.434 03:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:14:36.002 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.002 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:36.002 03:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.002 03:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.002 03:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.002 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.002 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:36.002 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:36.567 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:36.567 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.567 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.567 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:36.567 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:36.567 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.567 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.567 03:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.567 03:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.567 03:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.567 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.567 03:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.131 00:14:37.131 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.131 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.131 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.131 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.131 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.131 03:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.131 03:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.387 03:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.387 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.387 { 00:14:37.387 "cntlid": 45, 00:14:37.387 "qid": 0, 00:14:37.387 "state": "enabled", 00:14:37.387 "thread": "nvmf_tgt_poll_group_000", 00:14:37.387 "listen_address": { 00:14:37.387 "trtype": "TCP", 00:14:37.387 "adrfam": "IPv4", 00:14:37.387 "traddr": "10.0.0.2", 00:14:37.387 "trsvcid": "4420" 00:14:37.387 }, 00:14:37.387 "peer_address": { 00:14:37.387 "trtype": "TCP", 00:14:37.387 "adrfam": "IPv4", 00:14:37.387 "traddr": "10.0.0.1", 00:14:37.387 "trsvcid": "32928" 00:14:37.387 }, 00:14:37.387 "auth": { 00:14:37.387 "state": "completed", 00:14:37.387 "digest": "sha256", 00:14:37.387 "dhgroup": "ffdhe8192" 00:14:37.387 } 00:14:37.387 } 00:14:37.387 ]' 00:14:37.387 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.387 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.387 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.387 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:37.387 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.387 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.387 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.387 03:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.644 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:14:38.211 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.211 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:38.211 03:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.211 03:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.211 03:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.211 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.211 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:38.211 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:38.469 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:38.469 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.469 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:38.469 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:38.469 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:38.469 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.469 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:14:38.469 03:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.469 03:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.727 03:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.727 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.727 03:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:39.293 00:14:39.293 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.293 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.293 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.551 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.551 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.551 03:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.551 03:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.551 03:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.551 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:14:39.551 { 00:14:39.551 "cntlid": 47, 00:14:39.551 "qid": 0, 00:14:39.552 "state": "enabled", 00:14:39.552 "thread": "nvmf_tgt_poll_group_000", 00:14:39.552 "listen_address": { 00:14:39.552 "trtype": "TCP", 00:14:39.552 "adrfam": "IPv4", 00:14:39.552 "traddr": "10.0.0.2", 00:14:39.552 "trsvcid": "4420" 00:14:39.552 }, 00:14:39.552 "peer_address": { 00:14:39.552 "trtype": "TCP", 00:14:39.552 "adrfam": "IPv4", 00:14:39.552 "traddr": "10.0.0.1", 00:14:39.552 "trsvcid": "32960" 00:14:39.552 }, 00:14:39.552 "auth": { 00:14:39.552 "state": "completed", 00:14:39.552 "digest": "sha256", 00:14:39.552 "dhgroup": "ffdhe8192" 00:14:39.552 } 00:14:39.552 } 00:14:39.552 ]' 00:14:39.552 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.552 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.552 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.552 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:39.552 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.552 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.552 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.552 03:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.810 03:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:14:40.377 03:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.377 03:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:40.377 03:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.377 03:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.377 03:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.377 03:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:40.377 03:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:40.377 03:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.377 03:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:40.377 03:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:40.636 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:40.636 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
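[Editor's note] The round that begins just above (connect_authenticate sha384 null 0) repeats the same host/target RPC sequence that every round in this trace follows. Below is a condensed, annotated sketch of one such round; the NQNs, address, port, and socket path are copied from the trace itself, while it is assumed (not shown in this excerpt) that the DH-HMAC-CHAP key names key0/ckey0 were registered earlier in auth.sh and that the target app answers on rpc.py's default socket.

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208
SUBNQN=nqn.2024-03.io.spdk:cnode0
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Host side: restrict bdev_nvme to the digest/DH-group pair under test.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

# Target side: allow the host NQN on the subsystem with the matching key pair.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller; DH-HMAC-CHAP authentication runs during this connect.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify: controller name on the host, then the negotiated auth parameters on the target qpair.
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'              # digest, dhgroup, state "completed"

# Tear down before the next keyid/dhgroup/digest combination.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

As the trace markers show, each round is then bracketed by a kernel-host check (auth.sh@52 nvme connect with the raw DHHC-1 secrets, auth.sh@55 nvme disconnect) and by auth.sh@56 nvmf_subsystem_remove_host before the next combination starts.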
00:14:40.636 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:40.636 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:40.636 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:40.636 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.636 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.636 03:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.636 03:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.636 03:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.636 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.636 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.894 00:14:40.894 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.894 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.894 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.153 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.153 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.153 03:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.153 03:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.153 03:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.153 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.153 { 00:14:41.153 "cntlid": 49, 00:14:41.153 "qid": 0, 00:14:41.153 "state": "enabled", 00:14:41.153 "thread": "nvmf_tgt_poll_group_000", 00:14:41.153 "listen_address": { 00:14:41.153 "trtype": "TCP", 00:14:41.153 "adrfam": "IPv4", 00:14:41.153 "traddr": "10.0.0.2", 00:14:41.153 "trsvcid": "4420" 00:14:41.153 }, 00:14:41.153 "peer_address": { 00:14:41.153 "trtype": "TCP", 00:14:41.153 "adrfam": "IPv4", 00:14:41.153 "traddr": "10.0.0.1", 00:14:41.153 "trsvcid": "48610" 00:14:41.153 }, 00:14:41.153 "auth": { 00:14:41.153 "state": "completed", 00:14:41.153 "digest": "sha384", 00:14:41.153 "dhgroup": "null" 00:14:41.153 } 00:14:41.153 } 00:14:41.153 ]' 00:14:41.153 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.153 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.153 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.411 03:03:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:41.411 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.411 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.411 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.411 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.669 03:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:14:42.236 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.236 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:42.236 03:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.236 03:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.236 03:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.236 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.236 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:42.236 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:42.494 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:42.494 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.494 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:42.494 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:42.494 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:42.494 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.494 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.494 03:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.494 03:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.494 03:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.494 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.494 03:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.752 00:14:42.752 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.752 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.752 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.010 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.010 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.011 03:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.011 03:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.011 03:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.011 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.011 { 00:14:43.011 "cntlid": 51, 00:14:43.011 "qid": 0, 00:14:43.011 "state": "enabled", 00:14:43.011 "thread": "nvmf_tgt_poll_group_000", 00:14:43.011 "listen_address": { 00:14:43.011 "trtype": "TCP", 00:14:43.011 "adrfam": "IPv4", 00:14:43.011 "traddr": "10.0.0.2", 00:14:43.011 "trsvcid": "4420" 00:14:43.011 }, 00:14:43.011 "peer_address": { 00:14:43.011 "trtype": "TCP", 00:14:43.011 "adrfam": "IPv4", 00:14:43.011 "traddr": "10.0.0.1", 00:14:43.011 "trsvcid": "48638" 00:14:43.011 }, 00:14:43.011 "auth": { 00:14:43.011 "state": "completed", 00:14:43.011 "digest": "sha384", 00:14:43.011 "dhgroup": "null" 00:14:43.011 } 00:14:43.011 } 00:14:43.011 ]' 00:14:43.011 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.269 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:43.269 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.269 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:43.269 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.269 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.269 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.269 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.527 03:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:14:44.095 03:03:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.095 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:44.095 03:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.095 03:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.096 03:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.096 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.096 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:44.096 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:44.369 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:44.369 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.369 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:44.369 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:44.369 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:44.369 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.369 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.369 03:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.369 03:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.369 03:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.369 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.369 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.639 00:14:44.639 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.639 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.639 03:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.898 03:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.898 03:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.898 03:03:51 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.898 03:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.898 03:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.898 03:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.898 { 00:14:44.898 "cntlid": 53, 00:14:44.898 "qid": 0, 00:14:44.898 "state": "enabled", 00:14:44.898 "thread": "nvmf_tgt_poll_group_000", 00:14:44.898 "listen_address": { 00:14:44.898 "trtype": "TCP", 00:14:44.898 "adrfam": "IPv4", 00:14:44.898 "traddr": "10.0.0.2", 00:14:44.898 "trsvcid": "4420" 00:14:44.898 }, 00:14:44.898 "peer_address": { 00:14:44.898 "trtype": "TCP", 00:14:44.898 "adrfam": "IPv4", 00:14:44.898 "traddr": "10.0.0.1", 00:14:44.898 "trsvcid": "48664" 00:14:44.898 }, 00:14:44.898 "auth": { 00:14:44.898 "state": "completed", 00:14:44.898 "digest": "sha384", 00:14:44.898 "dhgroup": "null" 00:14:44.898 } 00:14:44.898 } 00:14:44.898 ]' 00:14:44.898 03:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.898 03:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:44.898 03:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.898 03:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:44.898 03:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.157 03:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.157 03:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.157 03:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.157 03:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:14:46.093 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.093 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:46.093 03:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.093 03:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.093 03:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.093 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.093 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:46.093 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:46.094 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:14:46.094 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.094 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:46.094 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:46.094 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:46.094 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.094 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:14:46.094 03:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.094 03:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.094 03:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.094 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.094 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.352 00:14:46.352 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.352 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.352 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.610 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.610 03:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.610 03:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.610 03:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.610 03:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.610 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.610 { 00:14:46.610 "cntlid": 55, 00:14:46.610 "qid": 0, 00:14:46.610 "state": "enabled", 00:14:46.610 "thread": "nvmf_tgt_poll_group_000", 00:14:46.610 "listen_address": { 00:14:46.610 "trtype": "TCP", 00:14:46.610 "adrfam": "IPv4", 00:14:46.610 "traddr": "10.0.0.2", 00:14:46.610 "trsvcid": "4420" 00:14:46.610 }, 00:14:46.610 "peer_address": { 00:14:46.610 "trtype": "TCP", 00:14:46.610 "adrfam": "IPv4", 00:14:46.610 "traddr": "10.0.0.1", 00:14:46.610 "trsvcid": "48684" 00:14:46.610 }, 00:14:46.610 "auth": { 00:14:46.610 "state": "completed", 00:14:46.610 "digest": "sha384", 00:14:46.610 "dhgroup": "null" 00:14:46.610 } 00:14:46.610 } 00:14:46.610 ]' 00:14:46.610 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.610 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:46.610 03:03:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.868 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:46.868 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.868 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.868 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.868 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.126 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:14:47.693 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.693 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:47.693 03:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.693 03:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.693 03:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.693 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:47.693 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.693 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:47.693 03:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:47.951 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:47.951 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.951 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:47.951 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:47.951 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:47.951 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.951 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.951 03:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.951 03:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.952 03:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.952 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.952 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.210 00:14:48.210 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.210 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.210 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.469 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.469 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.469 03:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.469 03:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.469 03:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.469 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.469 { 00:14:48.469 "cntlid": 57, 00:14:48.469 "qid": 0, 00:14:48.469 "state": "enabled", 00:14:48.469 "thread": "nvmf_tgt_poll_group_000", 00:14:48.469 "listen_address": { 00:14:48.469 "trtype": "TCP", 00:14:48.469 "adrfam": "IPv4", 00:14:48.469 "traddr": "10.0.0.2", 00:14:48.469 "trsvcid": "4420" 00:14:48.469 }, 00:14:48.469 "peer_address": { 00:14:48.469 "trtype": "TCP", 00:14:48.469 "adrfam": "IPv4", 00:14:48.469 "traddr": "10.0.0.1", 00:14:48.469 "trsvcid": "48722" 00:14:48.469 }, 00:14:48.469 "auth": { 00:14:48.469 "state": "completed", 00:14:48.469 "digest": "sha384", 00:14:48.469 "dhgroup": "ffdhe2048" 00:14:48.469 } 00:14:48.469 } 00:14:48.469 ]' 00:14:48.469 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.469 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:48.469 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.469 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:48.469 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.728 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.728 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.728 03:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.987 03:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret 
DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:14:49.556 03:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.556 03:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:49.556 03:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.556 03:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.556 03:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.556 03:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.556 03:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:49.556 03:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:49.815 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:49.815 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.815 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:49.815 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:49.815 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:49.815 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.815 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.815 03:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.815 03:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.815 03:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.815 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.815 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.074 00:14:50.074 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.074 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.074 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
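[Editor's note] The reason the same sequence keeps reappearing with different digest/dhgroup/key combinations is the driver loop visible in the trace markers (target/auth.sh@91 through @96). The fragment below is a reconstruction of that loop, not the script itself: the array contents are limited to the values that occur in this excerpt (the script defines them earlier, outside the excerpt), and hostrpc and connect_authenticate are the script's own helpers seen at auth.sh@31 and @34. The ${ckeys[$3]:+...} expansion at auth.sh@37 is also why the key3 rounds pass only --dhchap-key key3: no controller key is defined for that index, so the bidirectional option is simply omitted.

# Reconstructed from the auth.sh@91..@96 markers; array contents shown are only those
# observed in this excerpt - the script's own definitions may include more values.
digests=(sha256 sha384)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe8192)
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do   # keys[0..3] -> key0..key3, loaded earlier in auth.sh
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done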
00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.334 { 00:14:50.334 "cntlid": 59, 00:14:50.334 "qid": 0, 00:14:50.334 "state": "enabled", 00:14:50.334 "thread": "nvmf_tgt_poll_group_000", 00:14:50.334 "listen_address": { 00:14:50.334 "trtype": "TCP", 00:14:50.334 "adrfam": "IPv4", 00:14:50.334 "traddr": "10.0.0.2", 00:14:50.334 "trsvcid": "4420" 00:14:50.334 }, 00:14:50.334 "peer_address": { 00:14:50.334 "trtype": "TCP", 00:14:50.334 "adrfam": "IPv4", 00:14:50.334 "traddr": "10.0.0.1", 00:14:50.334 "trsvcid": "40190" 00:14:50.334 }, 00:14:50.334 "auth": { 00:14:50.334 "state": "completed", 00:14:50.334 "digest": "sha384", 00:14:50.334 "dhgroup": "ffdhe2048" 00:14:50.334 } 00:14:50.334 } 00:14:50.334 ]' 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.334 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.593 03:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:14:51.162 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.162 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:51.162 03:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.162 03:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.162 03:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.162 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.162 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:51.162 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:51.422 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:51.422 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.422 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:51.422 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:51.422 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:51.422 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.422 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.422 03:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.422 03:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.422 03:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.423 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.423 03:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.682 00:14:51.682 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.682 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.682 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.941 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.941 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.941 03:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.941 03:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.941 03:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.941 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.941 { 00:14:51.941 "cntlid": 61, 00:14:51.941 "qid": 0, 00:14:51.941 "state": "enabled", 00:14:51.941 "thread": "nvmf_tgt_poll_group_000", 00:14:51.941 "listen_address": { 00:14:51.941 "trtype": "TCP", 00:14:51.941 "adrfam": "IPv4", 00:14:51.941 "traddr": "10.0.0.2", 00:14:51.941 "trsvcid": "4420" 00:14:51.941 }, 00:14:51.941 "peer_address": { 00:14:51.941 "trtype": "TCP", 00:14:51.941 "adrfam": "IPv4", 00:14:51.941 "traddr": "10.0.0.1", 00:14:51.941 "trsvcid": "40232" 00:14:51.941 }, 00:14:51.941 "auth": { 00:14:51.941 "state": "completed", 00:14:51.941 "digest": "sha384", 00:14:51.941 "dhgroup": 
"ffdhe2048" 00:14:51.941 } 00:14:51.941 } 00:14:51.941 ]' 00:14:51.941 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.941 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.941 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.941 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:51.941 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.199 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.199 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.199 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.458 03:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:14:53.025 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.025 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:53.025 03:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.025 03:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.025 03:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.025 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.025 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:53.025 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:53.284 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:14:53.284 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.284 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:53.284 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:53.284 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:53.284 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.284 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:14:53.284 03:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.284 03:03:59 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:53.284 03:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.284 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:53.284 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:53.543 00:14:53.543 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.543 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.543 03:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.803 03:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.803 03:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.803 03:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.803 03:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.803 03:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.803 03:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.803 { 00:14:53.803 "cntlid": 63, 00:14:53.803 "qid": 0, 00:14:53.803 "state": "enabled", 00:14:53.803 "thread": "nvmf_tgt_poll_group_000", 00:14:53.803 "listen_address": { 00:14:53.803 "trtype": "TCP", 00:14:53.803 "adrfam": "IPv4", 00:14:53.803 "traddr": "10.0.0.2", 00:14:53.803 "trsvcid": "4420" 00:14:53.803 }, 00:14:53.803 "peer_address": { 00:14:53.803 "trtype": "TCP", 00:14:53.803 "adrfam": "IPv4", 00:14:53.803 "traddr": "10.0.0.1", 00:14:53.803 "trsvcid": "40260" 00:14:53.803 }, 00:14:53.803 "auth": { 00:14:53.803 "state": "completed", 00:14:53.803 "digest": "sha384", 00:14:53.803 "dhgroup": "ffdhe2048" 00:14:53.803 } 00:14:53.803 } 00:14:53.803 ]' 00:14:53.803 03:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.803 03:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.803 03:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.062 03:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:54.062 03:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.062 03:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.062 03:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.062 03:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.321 03:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid 
f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:14:54.889 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.889 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:54.890 03:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.890 03:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.890 03:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.890 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.890 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.890 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:54.890 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:55.148 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:14:55.148 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.148 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:55.148 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:55.148 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:55.148 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.148 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.148 03:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.148 03:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.148 03:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.148 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.148 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.716 00:14:55.716 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.716 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.716 03:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.976 { 00:14:55.976 "cntlid": 65, 00:14:55.976 "qid": 0, 00:14:55.976 "state": "enabled", 00:14:55.976 "thread": "nvmf_tgt_poll_group_000", 00:14:55.976 "listen_address": { 00:14:55.976 "trtype": "TCP", 00:14:55.976 "adrfam": "IPv4", 00:14:55.976 "traddr": "10.0.0.2", 00:14:55.976 "trsvcid": "4420" 00:14:55.976 }, 00:14:55.976 "peer_address": { 00:14:55.976 "trtype": "TCP", 00:14:55.976 "adrfam": "IPv4", 00:14:55.976 "traddr": "10.0.0.1", 00:14:55.976 "trsvcid": "40280" 00:14:55.976 }, 00:14:55.976 "auth": { 00:14:55.976 "state": "completed", 00:14:55.976 "digest": "sha384", 00:14:55.976 "dhgroup": "ffdhe3072" 00:14:55.976 } 00:14:55.976 } 00:14:55.976 ]' 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.976 03:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.235 03:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:14:56.807 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.065 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:57.065 03:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.065 03:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.065 03:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.065 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:14:57.065 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:57.065 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:57.323 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:14:57.323 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.323 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:57.323 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:57.323 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:57.323 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.323 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.323 03:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.323 03:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.323 03:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.323 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.323 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.581 00:14:57.581 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.581 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.581 03:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.840 { 00:14:57.840 "cntlid": 67, 00:14:57.840 "qid": 0, 00:14:57.840 "state": "enabled", 00:14:57.840 "thread": "nvmf_tgt_poll_group_000", 00:14:57.840 "listen_address": { 00:14:57.840 "trtype": "TCP", 00:14:57.840 "adrfam": "IPv4", 00:14:57.840 "traddr": "10.0.0.2", 00:14:57.840 "trsvcid": "4420" 00:14:57.840 }, 00:14:57.840 "peer_address": { 00:14:57.840 "trtype": 
"TCP", 00:14:57.840 "adrfam": "IPv4", 00:14:57.840 "traddr": "10.0.0.1", 00:14:57.840 "trsvcid": "40300" 00:14:57.840 }, 00:14:57.840 "auth": { 00:14:57.840 "state": "completed", 00:14:57.840 "digest": "sha384", 00:14:57.840 "dhgroup": "ffdhe3072" 00:14:57.840 } 00:14:57.840 } 00:14:57.840 ]' 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.840 03:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.098 03:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:14:58.663 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.663 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:14:58.663 03:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.663 03:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.923 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.490 00:14:59.490 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.490 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.490 03:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.749 03:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.749 03:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.749 03:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.749 03:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.749 03:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.749 03:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.749 { 00:14:59.749 "cntlid": 69, 00:14:59.749 "qid": 0, 00:14:59.749 "state": "enabled", 00:14:59.749 "thread": "nvmf_tgt_poll_group_000", 00:14:59.749 "listen_address": { 00:14:59.749 "trtype": "TCP", 00:14:59.749 "adrfam": "IPv4", 00:14:59.749 "traddr": "10.0.0.2", 00:14:59.749 "trsvcid": "4420" 00:14:59.749 }, 00:14:59.749 "peer_address": { 00:14:59.750 "trtype": "TCP", 00:14:59.750 "adrfam": "IPv4", 00:14:59.750 "traddr": "10.0.0.1", 00:14:59.750 "trsvcid": "56912" 00:14:59.750 }, 00:14:59.750 "auth": { 00:14:59.750 "state": "completed", 00:14:59.750 "digest": "sha384", 00:14:59.750 "dhgroup": "ffdhe3072" 00:14:59.750 } 00:14:59.750 } 00:14:59.750 ]' 00:14:59.750 03:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.750 03:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:59.750 03:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.750 03:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:59.750 03:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.750 03:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.750 03:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.750 03:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.008 03:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:00.944 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:01.542 00:15:01.542 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.542 
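The cycle traced above is the core DH-HMAC-CHAP provisioning pattern in target/auth.sh: the target registers the host NQN together with a key (and, for the keys that have one, a controller key), and the host-side bdev controller is then attached with the matching pair. A condensed sketch of that pattern, reusing the NQNs and addresses from this run and assuming key0/ckey0 are keyring entries the script set up earlier (not shown here), would look roughly like:

# Names taken from the trace; key0/ckey0 are placeholders for keys registered earlier in the script.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208

# Target side: allow the host and bind its DH-HMAC-CHAP key pair.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side (separate SPDK app answering on /var/tmp/host.sock): attach and authenticate.
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

Note that for key3 the trace passes only --dhchap-key, which matches the script's ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion: the controller key is supplied only when a ckey exists for that index.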
03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.542 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.542 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.542 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.542 03:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.542 03:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.542 03:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.542 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.542 { 00:15:01.542 "cntlid": 71, 00:15:01.542 "qid": 0, 00:15:01.542 "state": "enabled", 00:15:01.542 "thread": "nvmf_tgt_poll_group_000", 00:15:01.542 "listen_address": { 00:15:01.542 "trtype": "TCP", 00:15:01.542 "adrfam": "IPv4", 00:15:01.542 "traddr": "10.0.0.2", 00:15:01.542 "trsvcid": "4420" 00:15:01.542 }, 00:15:01.542 "peer_address": { 00:15:01.542 "trtype": "TCP", 00:15:01.542 "adrfam": "IPv4", 00:15:01.542 "traddr": "10.0.0.1", 00:15:01.542 "trsvcid": "56960" 00:15:01.542 }, 00:15:01.542 "auth": { 00:15:01.542 "state": "completed", 00:15:01.542 "digest": "sha384", 00:15:01.542 "dhgroup": "ffdhe3072" 00:15:01.542 } 00:15:01.542 } 00:15:01.542 ]' 00:15:01.542 03:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.542 03:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.542 03:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.807 03:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:01.807 03:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.807 03:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.807 03:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.807 03:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.066 03:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:15:02.633 03:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.633 03:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:02.633 03:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.633 03:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.633 03:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.633 03:04:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:02.633 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.633 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:02.633 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:02.892 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:02.892 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.892 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:02.892 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:02.892 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:02.892 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.892 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.892 03:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.892 03:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.892 03:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.892 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.892 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.150 00:15:03.409 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.409 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.409 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.409 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.667 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.667 03:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.667 03:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.667 03:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.667 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.667 { 00:15:03.667 "cntlid": 73, 00:15:03.667 "qid": 0, 00:15:03.667 "state": "enabled", 00:15:03.667 "thread": "nvmf_tgt_poll_group_000", 00:15:03.667 "listen_address": { 00:15:03.667 "trtype": 
"TCP", 00:15:03.667 "adrfam": "IPv4", 00:15:03.667 "traddr": "10.0.0.2", 00:15:03.667 "trsvcid": "4420" 00:15:03.667 }, 00:15:03.667 "peer_address": { 00:15:03.667 "trtype": "TCP", 00:15:03.667 "adrfam": "IPv4", 00:15:03.667 "traddr": "10.0.0.1", 00:15:03.667 "trsvcid": "56996" 00:15:03.667 }, 00:15:03.667 "auth": { 00:15:03.667 "state": "completed", 00:15:03.667 "digest": "sha384", 00:15:03.667 "dhgroup": "ffdhe4096" 00:15:03.667 } 00:15:03.667 } 00:15:03.667 ]' 00:15:03.667 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.667 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.667 03:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.667 03:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:03.667 03:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.667 03:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.667 03:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.667 03:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.926 03:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:15:04.493 03:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.493 03:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:04.493 03:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.493 03:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.493 03:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.493 03:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.493 03:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:04.493 03:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:04.751 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:04.751 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.751 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:04.751 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:04.751 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:04.751 03:04:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.751 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.751 03:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.751 03:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.751 03:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.751 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.751 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.010 00:15:05.269 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.269 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.269 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.528 { 00:15:05.528 "cntlid": 75, 00:15:05.528 "qid": 0, 00:15:05.528 "state": "enabled", 00:15:05.528 "thread": "nvmf_tgt_poll_group_000", 00:15:05.528 "listen_address": { 00:15:05.528 "trtype": "TCP", 00:15:05.528 "adrfam": "IPv4", 00:15:05.528 "traddr": "10.0.0.2", 00:15:05.528 "trsvcid": "4420" 00:15:05.528 }, 00:15:05.528 "peer_address": { 00:15:05.528 "trtype": "TCP", 00:15:05.528 "adrfam": "IPv4", 00:15:05.528 "traddr": "10.0.0.1", 00:15:05.528 "trsvcid": "57016" 00:15:05.528 }, 00:15:05.528 "auth": { 00:15:05.528 "state": "completed", 00:15:05.528 "digest": "sha384", 00:15:05.528 "dhgroup": "ffdhe4096" 00:15:05.528 } 00:15:05.528 } 00:15:05.528 ]' 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.528 03:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.786 03:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:15:06.353 03:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.353 03:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:06.353 03:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.353 03:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.353 03:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.353 03:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.353 03:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:06.353 03:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:06.611 03:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:06.611 03:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.611 03:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:06.611 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:06.611 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:06.611 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.611 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.611 03:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.611 03:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.611 03:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.611 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.611 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.868 00:15:06.868 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.868 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.868 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.435 { 00:15:07.435 "cntlid": 77, 00:15:07.435 "qid": 0, 00:15:07.435 "state": "enabled", 00:15:07.435 "thread": "nvmf_tgt_poll_group_000", 00:15:07.435 "listen_address": { 00:15:07.435 "trtype": "TCP", 00:15:07.435 "adrfam": "IPv4", 00:15:07.435 "traddr": "10.0.0.2", 00:15:07.435 "trsvcid": "4420" 00:15:07.435 }, 00:15:07.435 "peer_address": { 00:15:07.435 "trtype": "TCP", 00:15:07.435 "adrfam": "IPv4", 00:15:07.435 "traddr": "10.0.0.1", 00:15:07.435 "trsvcid": "57036" 00:15:07.435 }, 00:15:07.435 "auth": { 00:15:07.435 "state": "completed", 00:15:07.435 "digest": "sha384", 00:15:07.435 "dhgroup": "ffdhe4096" 00:15:07.435 } 00:15:07.435 } 00:15:07.435 ]' 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.435 03:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.693 03:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:15:08.260 03:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.260 03:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:08.260 03:04:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.260 03:04:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.260 03:04:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.260 03:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.260 03:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:08.260 03:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:08.829 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:08.829 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.829 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:08.829 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:08.829 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:08.829 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.829 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:15:08.829 03:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.829 03:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.829 03:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.829 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.829 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.088 00:15:09.088 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.088 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.088 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.347 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.347 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.347 03:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.347 03:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.347 03:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.347 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:15:09.347 { 00:15:09.347 "cntlid": 79, 00:15:09.347 "qid": 0, 00:15:09.347 "state": "enabled", 00:15:09.347 "thread": "nvmf_tgt_poll_group_000", 00:15:09.347 "listen_address": { 00:15:09.347 "trtype": "TCP", 00:15:09.347 "adrfam": "IPv4", 00:15:09.347 "traddr": "10.0.0.2", 00:15:09.347 "trsvcid": "4420" 00:15:09.347 }, 00:15:09.347 "peer_address": { 00:15:09.347 "trtype": "TCP", 00:15:09.347 "adrfam": "IPv4", 00:15:09.347 "traddr": "10.0.0.1", 00:15:09.347 "trsvcid": "57058" 00:15:09.348 }, 00:15:09.348 "auth": { 00:15:09.348 "state": "completed", 00:15:09.348 "digest": "sha384", 00:15:09.348 "dhgroup": "ffdhe4096" 00:15:09.348 } 00:15:09.348 } 00:15:09.348 ]' 00:15:09.348 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.348 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.348 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.348 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:09.348 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.348 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.348 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.348 03:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.607 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:15:10.175 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.175 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:10.175 03:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.175 03:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.175 03:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.175 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.175 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.175 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:10.175 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:10.435 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:10.435 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.435 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
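Here the run rolls over from ffdhe4096 to ffdhe6144. Each DH group is exercised with the same per-key cycle, and the host-side bdev_nvme options are reset before every attempt, so the outer structure of this phase of auth.sh amounts to something like the loop below (group list limited to what is visible in this part of the trace; the digest stays sha384 throughout):

# Sketch of the loop shape only; the body is the add_host/attach/verify/connect cycle shown above.
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in 0 1 2 3; do
        "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # ... nvmf_subsystem_add_host / bdev_nvme_attach_controller with "key$keyid" ...
        # ... verify qpairs, detach, nvme connect/disconnect, nvmf_subsystem_remove_host ...
    done
done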
00:15:10.435 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:10.435 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:10.435 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.435 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.435 03:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.435 03:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.435 03:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.435 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.435 03:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.003 00:15:11.003 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.003 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.003 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.262 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.262 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.262 03:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.262 03:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.262 03:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.262 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.262 { 00:15:11.262 "cntlid": 81, 00:15:11.262 "qid": 0, 00:15:11.262 "state": "enabled", 00:15:11.262 "thread": "nvmf_tgt_poll_group_000", 00:15:11.262 "listen_address": { 00:15:11.262 "trtype": "TCP", 00:15:11.262 "adrfam": "IPv4", 00:15:11.262 "traddr": "10.0.0.2", 00:15:11.262 "trsvcid": "4420" 00:15:11.262 }, 00:15:11.262 "peer_address": { 00:15:11.262 "trtype": "TCP", 00:15:11.262 "adrfam": "IPv4", 00:15:11.262 "traddr": "10.0.0.1", 00:15:11.262 "trsvcid": "36334" 00:15:11.262 }, 00:15:11.262 "auth": { 00:15:11.262 "state": "completed", 00:15:11.262 "digest": "sha384", 00:15:11.262 "dhgroup": "ffdhe6144" 00:15:11.262 } 00:15:11.262 } 00:15:11.262 ]' 00:15:11.262 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.262 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.262 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.262 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:15:11.262 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.521 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.521 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.521 03:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.521 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.458 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.459 03:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.459 03:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.459 03:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.459 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.459 03:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.026 00:15:13.026 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.026 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.026 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.284 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.284 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.284 03:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.284 03:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.284 03:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.284 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.284 { 00:15:13.284 "cntlid": 83, 00:15:13.284 "qid": 0, 00:15:13.284 "state": "enabled", 00:15:13.284 "thread": "nvmf_tgt_poll_group_000", 00:15:13.284 "listen_address": { 00:15:13.284 "trtype": "TCP", 00:15:13.284 "adrfam": "IPv4", 00:15:13.284 "traddr": "10.0.0.2", 00:15:13.284 "trsvcid": "4420" 00:15:13.284 }, 00:15:13.284 "peer_address": { 00:15:13.284 "trtype": "TCP", 00:15:13.284 "adrfam": "IPv4", 00:15:13.284 "traddr": "10.0.0.1", 00:15:13.284 "trsvcid": "36366" 00:15:13.284 }, 00:15:13.284 "auth": { 00:15:13.284 "state": "completed", 00:15:13.284 "digest": "sha384", 00:15:13.284 "dhgroup": "ffdhe6144" 00:15:13.284 } 00:15:13.284 } 00:15:13.284 ]' 00:15:13.284 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.284 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.284 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.284 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:13.284 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.542 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.542 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.542 03:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.801 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:15:14.370 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:14.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.370 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:14.370 03:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.370 03:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.370 03:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.370 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.370 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:14.370 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:14.630 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:14.630 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.630 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:14.630 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:14.630 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:14.630 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.630 03:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.630 03:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.630 03:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.630 03:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.630 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.630 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.198 00:15:15.198 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.198 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.198 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.457 { 00:15:15.457 "cntlid": 85, 00:15:15.457 "qid": 0, 00:15:15.457 "state": "enabled", 00:15:15.457 "thread": "nvmf_tgt_poll_group_000", 00:15:15.457 "listen_address": { 00:15:15.457 "trtype": "TCP", 00:15:15.457 "adrfam": "IPv4", 00:15:15.457 "traddr": "10.0.0.2", 00:15:15.457 "trsvcid": "4420" 00:15:15.457 }, 00:15:15.457 "peer_address": { 00:15:15.457 "trtype": "TCP", 00:15:15.457 "adrfam": "IPv4", 00:15:15.457 "traddr": "10.0.0.1", 00:15:15.457 "trsvcid": "36396" 00:15:15.457 }, 00:15:15.457 "auth": { 00:15:15.457 "state": "completed", 00:15:15.457 "digest": "sha384", 00:15:15.457 "dhgroup": "ffdhe6144" 00:15:15.457 } 00:15:15.457 } 00:15:15.457 ]' 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.457 03:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.717 03:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:15:16.285 03:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.548 03:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:16.548 03:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.548 03:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.548 03:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.548 03:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.548 03:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:16.548 03:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:16.810 03:04:23 
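The qpairs dumps in this trace all feed the same set of assertions: the test reads back the attached controller name and the subsystem's queue pairs, then checks that the negotiated digest, DH group, and authentication state match what was just configured. A trimmed version of those checks, assuming the target app answers on SPDK's default RPC socket while the host app uses /var/tmp/host.sock as in the trace:

# Read back the host-side controller and the target-side qpairs, then assert the auth parameters.
name=$("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]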
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:16.810 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.810 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:16.810 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:16.810 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:16.810 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.810 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:15:16.810 03:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.810 03:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.810 03:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.810 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.810 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:17.068 00:15:17.068 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.068 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.068 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.327 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.327 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.327 03:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.327 03:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.327 03:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.327 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.327 { 00:15:17.327 "cntlid": 87, 00:15:17.327 "qid": 0, 00:15:17.327 "state": "enabled", 00:15:17.327 "thread": "nvmf_tgt_poll_group_000", 00:15:17.327 "listen_address": { 00:15:17.327 "trtype": "TCP", 00:15:17.327 "adrfam": "IPv4", 00:15:17.327 "traddr": "10.0.0.2", 00:15:17.327 "trsvcid": "4420" 00:15:17.327 }, 00:15:17.327 "peer_address": { 00:15:17.327 "trtype": "TCP", 00:15:17.327 "adrfam": "IPv4", 00:15:17.327 "traddr": "10.0.0.1", 00:15:17.327 "trsvcid": "36430" 00:15:17.327 }, 00:15:17.327 "auth": { 00:15:17.327 "state": "completed", 00:15:17.327 "digest": "sha384", 00:15:17.327 "dhgroup": "ffdhe6144" 00:15:17.327 } 00:15:17.327 } 00:15:17.327 ]' 00:15:17.327 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.586 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:17.586 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.586 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:17.586 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.586 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.586 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.586 03:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.845 03:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:15:18.416 03:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.416 03:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:18.416 03:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.416 03:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.416 03:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.416 03:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.416 03:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.416 03:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:18.416 03:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:18.704 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:18.704 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.704 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.704 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:18.704 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:18.705 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.705 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.705 03:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.705 03:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.705 03:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.705 03:04:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.705 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.281 00:15:19.281 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.281 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.281 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.540 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.540 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.540 03:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.540 03:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.540 03:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.540 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.540 { 00:15:19.540 "cntlid": 89, 00:15:19.540 "qid": 0, 00:15:19.540 "state": "enabled", 00:15:19.540 "thread": "nvmf_tgt_poll_group_000", 00:15:19.540 "listen_address": { 00:15:19.540 "trtype": "TCP", 00:15:19.540 "adrfam": "IPv4", 00:15:19.540 "traddr": "10.0.0.2", 00:15:19.540 "trsvcid": "4420" 00:15:19.540 }, 00:15:19.540 "peer_address": { 00:15:19.540 "trtype": "TCP", 00:15:19.540 "adrfam": "IPv4", 00:15:19.540 "traddr": "10.0.0.1", 00:15:19.540 "trsvcid": "36460" 00:15:19.540 }, 00:15:19.540 "auth": { 00:15:19.540 "state": "completed", 00:15:19.540 "digest": "sha384", 00:15:19.540 "dhgroup": "ffdhe8192" 00:15:19.540 } 00:15:19.540 } 00:15:19.540 ]' 00:15:19.540 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.540 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.540 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.540 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:19.540 03:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.799 03:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.799 03:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.799 03:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.799 03:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret 
DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:15:20.735 03:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.735 03:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:20.735 03:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.735 03:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.735 03:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.735 03:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.735 03:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:20.735 03:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:20.735 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:20.735 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.735 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.736 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:20.736 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:20.736 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.736 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.736 03:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.736 03:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.736 03:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.736 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.736 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.304 00:15:21.304 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.304 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.304 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
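The round traced above boils down to three RPC calls: restrict the host-side DH-HMAC-CHAP digests and DH groups, register the key pair for the host NQN on the subsystem, and attach a controller so the handshake has to complete. A minimal sketch of one such iteration, using the socket path, NQNs and key names visible in the trace (key1/ckey1 are assumed to be already registered by the test setup, and the target is assumed to listen on the default RPC socket):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208

  # host side: only offer the sha384 digest and the ffdhe8192 DH group
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

  # target side (default RPC socket): allow the host NQN with key1/ckey1
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # host side: attach a controller, which forces the DH-HMAC-CHAP handshake
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1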
00:15:21.563 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.563 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.563 03:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.563 03:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.563 03:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.563 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.563 { 00:15:21.563 "cntlid": 91, 00:15:21.563 "qid": 0, 00:15:21.563 "state": "enabled", 00:15:21.563 "thread": "nvmf_tgt_poll_group_000", 00:15:21.563 "listen_address": { 00:15:21.563 "trtype": "TCP", 00:15:21.563 "adrfam": "IPv4", 00:15:21.563 "traddr": "10.0.0.2", 00:15:21.563 "trsvcid": "4420" 00:15:21.563 }, 00:15:21.563 "peer_address": { 00:15:21.563 "trtype": "TCP", 00:15:21.563 "adrfam": "IPv4", 00:15:21.563 "traddr": "10.0.0.1", 00:15:21.563 "trsvcid": "33828" 00:15:21.563 }, 00:15:21.563 "auth": { 00:15:21.563 "state": "completed", 00:15:21.563 "digest": "sha384", 00:15:21.563 "dhgroup": "ffdhe8192" 00:15:21.563 } 00:15:21.563 } 00:15:21.563 ]' 00:15:21.563 03:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.563 03:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.563 03:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.823 03:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:21.823 03:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.823 03:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.823 03:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.823 03:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.082 03:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:15:22.649 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.649 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:22.649 03:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.649 03:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.649 03:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.649 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.649 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:15:22.649 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:22.908 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:22.908 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.908 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.908 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:22.908 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:22.908 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.908 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.908 03:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.908 03:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.909 03:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.909 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.909 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.841 00:15:23.841 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.841 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.841 03:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.841 03:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.841 03:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.841 03:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.841 03:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.841 03:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.841 03:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.841 { 00:15:23.841 "cntlid": 93, 00:15:23.841 "qid": 0, 00:15:23.841 "state": "enabled", 00:15:23.841 "thread": "nvmf_tgt_poll_group_000", 00:15:23.841 "listen_address": { 00:15:23.841 "trtype": "TCP", 00:15:23.841 "adrfam": "IPv4", 00:15:23.841 "traddr": "10.0.0.2", 00:15:23.841 "trsvcid": "4420" 00:15:23.841 }, 00:15:23.841 "peer_address": { 00:15:23.841 "trtype": "TCP", 00:15:23.841 "adrfam": "IPv4", 00:15:23.841 "traddr": "10.0.0.1", 00:15:23.841 "trsvcid": "33870" 00:15:23.841 }, 00:15:23.841 
"auth": { 00:15:23.841 "state": "completed", 00:15:23.841 "digest": "sha384", 00:15:23.841 "dhgroup": "ffdhe8192" 00:15:23.841 } 00:15:23.841 } 00:15:23.841 ]' 00:15:23.841 03:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.841 03:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.841 03:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.098 03:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.098 03:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.098 03:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.098 03:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.098 03:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.356 03:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:15:24.920 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.920 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:24.920 03:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.920 03:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.920 03:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.920 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.920 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:24.920 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:25.178 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:25.178 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.178 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.178 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:25.178 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:25.178 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.178 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:15:25.178 03:04:31 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.178 03:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.178 03:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.178 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.178 03:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:26.112 00:15:26.112 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.112 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.113 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.113 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.113 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.113 03:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.113 03:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.113 03:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.113 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.113 { 00:15:26.113 "cntlid": 95, 00:15:26.113 "qid": 0, 00:15:26.113 "state": "enabled", 00:15:26.113 "thread": "nvmf_tgt_poll_group_000", 00:15:26.113 "listen_address": { 00:15:26.113 "trtype": "TCP", 00:15:26.113 "adrfam": "IPv4", 00:15:26.113 "traddr": "10.0.0.2", 00:15:26.113 "trsvcid": "4420" 00:15:26.113 }, 00:15:26.113 "peer_address": { 00:15:26.113 "trtype": "TCP", 00:15:26.113 "adrfam": "IPv4", 00:15:26.113 "traddr": "10.0.0.1", 00:15:26.113 "trsvcid": "33888" 00:15:26.113 }, 00:15:26.113 "auth": { 00:15:26.113 "state": "completed", 00:15:26.113 "digest": "sha384", 00:15:26.113 "dhgroup": "ffdhe8192" 00:15:26.113 } 00:15:26.113 } 00:15:26.113 ]' 00:15:26.113 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.371 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.371 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.371 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.371 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.371 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.371 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.371 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.628 03:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:15:27.193 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.193 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:27.193 03:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.193 03:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.451 03:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.451 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:27.451 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.451 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.451 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:27.451 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:27.709 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:27.709 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.709 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:27.709 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:27.709 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:27.709 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.709 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.709 03:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.709 03:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.709 03:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.709 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.709 03:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.967 00:15:27.967 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
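Verification follows the same pattern each time: confirm the host created the bdev controller, then read the subsystem's qpairs on the target and check the negotiated auth parameters with jq. A rough sketch of those checks using the jq filters from the trace; the expected sha512/null values match the round being run here, and the target is again assumed to be on the default RPC socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0

  # host side: the attached controller should show up as nvme0
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # target side: the qpair must report the negotiated digest/dhgroup and a
  # completed DH-HMAC-CHAP handshake
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # tear the bdev controller down again before the next combination
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0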
00:15:27.967 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.967 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.225 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.225 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.225 03:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.225 03:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.225 03:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.225 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.225 { 00:15:28.225 "cntlid": 97, 00:15:28.225 "qid": 0, 00:15:28.225 "state": "enabled", 00:15:28.225 "thread": "nvmf_tgt_poll_group_000", 00:15:28.225 "listen_address": { 00:15:28.225 "trtype": "TCP", 00:15:28.225 "adrfam": "IPv4", 00:15:28.225 "traddr": "10.0.0.2", 00:15:28.225 "trsvcid": "4420" 00:15:28.225 }, 00:15:28.225 "peer_address": { 00:15:28.225 "trtype": "TCP", 00:15:28.225 "adrfam": "IPv4", 00:15:28.225 "traddr": "10.0.0.1", 00:15:28.226 "trsvcid": "33918" 00:15:28.226 }, 00:15:28.226 "auth": { 00:15:28.226 "state": "completed", 00:15:28.226 "digest": "sha512", 00:15:28.226 "dhgroup": "null" 00:15:28.226 } 00:15:28.226 } 00:15:28.226 ]' 00:15:28.226 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.226 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:28.226 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.226 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:28.226 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.226 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.226 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.226 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.484 03:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.425 03:04:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.425 03:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.992 00:15:29.992 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.992 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.992 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.992 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.992 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.992 03:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.992 03:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.992 03:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.992 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.992 { 00:15:29.992 "cntlid": 99, 00:15:29.992 "qid": 0, 00:15:29.992 "state": "enabled", 00:15:29.992 "thread": "nvmf_tgt_poll_group_000", 00:15:29.992 "listen_address": { 00:15:29.992 "trtype": "TCP", 00:15:29.992 "adrfam": 
"IPv4", 00:15:29.992 "traddr": "10.0.0.2", 00:15:29.992 "trsvcid": "4420" 00:15:29.992 }, 00:15:29.992 "peer_address": { 00:15:29.992 "trtype": "TCP", 00:15:29.992 "adrfam": "IPv4", 00:15:29.992 "traddr": "10.0.0.1", 00:15:29.992 "trsvcid": "37536" 00:15:29.992 }, 00:15:29.992 "auth": { 00:15:29.992 "state": "completed", 00:15:29.992 "digest": "sha512", 00:15:29.992 "dhgroup": "null" 00:15:29.992 } 00:15:29.992 } 00:15:29.992 ]' 00:15:29.992 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.251 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:30.251 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.251 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:30.251 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.251 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.251 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.251 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.510 03:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:15:31.077 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.077 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:31.077 03:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.077 03:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.077 03:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.077 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.077 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:31.077 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:31.336 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:31.336 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.336 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:31.336 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:31.336 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:31.336 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.336 03:04:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.336 03:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.336 03:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.336 03:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.336 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.336 03:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.904 00:15:31.904 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.904 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.904 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.904 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.904 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.904 03:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.904 03:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.904 03:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.904 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.904 { 00:15:31.904 "cntlid": 101, 00:15:31.904 "qid": 0, 00:15:31.904 "state": "enabled", 00:15:31.904 "thread": "nvmf_tgt_poll_group_000", 00:15:31.904 "listen_address": { 00:15:31.904 "trtype": "TCP", 00:15:31.904 "adrfam": "IPv4", 00:15:31.904 "traddr": "10.0.0.2", 00:15:31.904 "trsvcid": "4420" 00:15:31.904 }, 00:15:31.904 "peer_address": { 00:15:31.904 "trtype": "TCP", 00:15:31.904 "adrfam": "IPv4", 00:15:31.904 "traddr": "10.0.0.1", 00:15:31.904 "trsvcid": "37552" 00:15:31.904 }, 00:15:31.904 "auth": { 00:15:31.904 "state": "completed", 00:15:31.904 "digest": "sha512", 00:15:31.904 "dhgroup": "null" 00:15:31.904 } 00:15:31.904 } 00:15:31.904 ]' 00:15:31.904 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.163 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:32.163 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.163 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:32.163 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.163 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.163 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
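Each combination is also exercised through the kernel initiator: nvme-cli connects with plain-text DHHC-1 secrets instead of key names, disconnects, and the host entry is removed from the subsystem before the next key is configured. A sketch of that step with the flags seen in the trace; the DHHC-1 strings below are placeholders, not the secrets from this run:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=f622eda1-fcfe-4e16-bc81-0757da055208

  # kernel host: authenticate with the host secret and the controller secret
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:02:<host secret placeholder>:' \
      --dhchap-ctrl-secret 'DHHC-1:01:<ctrl secret placeholder>:'

  nvme disconnect -n "$subnqn"

  # target side: drop the host entry so the next key/dhgroup round starts clean
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      "$subnqn" "nqn.2014-08.org.nvmexpress:uuid:$hostid"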
00:15:32.163 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.421 03:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.357 03:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.616 00:15:33.616 03:04:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.616 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.616 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.875 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.875 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.875 03:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.875 03:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.875 03:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.875 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.875 { 00:15:33.875 "cntlid": 103, 00:15:33.875 "qid": 0, 00:15:33.875 "state": "enabled", 00:15:33.875 "thread": "nvmf_tgt_poll_group_000", 00:15:33.875 "listen_address": { 00:15:33.875 "trtype": "TCP", 00:15:33.875 "adrfam": "IPv4", 00:15:33.875 "traddr": "10.0.0.2", 00:15:33.875 "trsvcid": "4420" 00:15:33.875 }, 00:15:33.875 "peer_address": { 00:15:33.875 "trtype": "TCP", 00:15:33.875 "adrfam": "IPv4", 00:15:33.875 "traddr": "10.0.0.1", 00:15:33.875 "trsvcid": "37582" 00:15:33.875 }, 00:15:33.875 "auth": { 00:15:33.875 "state": "completed", 00:15:33.875 "digest": "sha512", 00:15:33.875 "dhgroup": "null" 00:15:33.875 } 00:15:33.875 } 00:15:33.875 ]' 00:15:33.875 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.875 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:33.875 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.133 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:34.133 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.133 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.133 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.133 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.392 03:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:15:34.959 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.959 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:34.959 03:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.959 03:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.959 03:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:15:34.959 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.959 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.959 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:34.959 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:35.218 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:35.218 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.218 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:35.218 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:35.218 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:35.218 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.218 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.218 03:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.218 03:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.218 03:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.218 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.218 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.477 00:15:35.477 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.477 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.477 03:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.735 03:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.735 03:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.735 03:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.735 03:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.735 03:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.735 03:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.735 { 00:15:35.735 "cntlid": 105, 00:15:35.735 "qid": 0, 00:15:35.735 "state": "enabled", 00:15:35.735 "thread": "nvmf_tgt_poll_group_000", 00:15:35.735 
"listen_address": { 00:15:35.735 "trtype": "TCP", 00:15:35.735 "adrfam": "IPv4", 00:15:35.735 "traddr": "10.0.0.2", 00:15:35.735 "trsvcid": "4420" 00:15:35.735 }, 00:15:35.735 "peer_address": { 00:15:35.735 "trtype": "TCP", 00:15:35.735 "adrfam": "IPv4", 00:15:35.735 "traddr": "10.0.0.1", 00:15:35.735 "trsvcid": "37592" 00:15:35.735 }, 00:15:35.735 "auth": { 00:15:35.735 "state": "completed", 00:15:35.735 "digest": "sha512", 00:15:35.735 "dhgroup": "ffdhe2048" 00:15:35.735 } 00:15:35.735 } 00:15:35.735 ]' 00:15:35.735 03:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.994 03:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:35.994 03:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.995 03:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.995 03:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.995 03:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.995 03:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.995 03:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.254 03:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:15:36.820 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.820 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:36.820 03:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.820 03:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.078 03:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.078 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.078 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:37.078 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:37.346 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:37.346 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.346 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:37.346 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:37.346 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:15:37.346 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.346 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.346 03:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.346 03:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.346 03:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.346 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.346 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.640 00:15:37.640 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.640 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.640 03:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.903 { 00:15:37.903 "cntlid": 107, 00:15:37.903 "qid": 0, 00:15:37.903 "state": "enabled", 00:15:37.903 "thread": "nvmf_tgt_poll_group_000", 00:15:37.903 "listen_address": { 00:15:37.903 "trtype": "TCP", 00:15:37.903 "adrfam": "IPv4", 00:15:37.903 "traddr": "10.0.0.2", 00:15:37.903 "trsvcid": "4420" 00:15:37.903 }, 00:15:37.903 "peer_address": { 00:15:37.903 "trtype": "TCP", 00:15:37.903 "adrfam": "IPv4", 00:15:37.903 "traddr": "10.0.0.1", 00:15:37.903 "trsvcid": "37616" 00:15:37.903 }, 00:15:37.903 "auth": { 00:15:37.903 "state": "completed", 00:15:37.903 "digest": "sha512", 00:15:37.903 "dhgroup": "ffdhe2048" 00:15:37.903 } 00:15:37.903 } 00:15:37.903 ]' 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.903 03:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.162 03:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:15:39.098 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.098 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:39.098 03:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.098 03:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.098 03:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.099 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.357 00:15:39.357 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.357 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.357 03:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.616 03:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.616 03:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.616 03:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.616 03:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.616 03:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.616 03:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.616 { 00:15:39.616 "cntlid": 109, 00:15:39.616 "qid": 0, 00:15:39.616 "state": "enabled", 00:15:39.616 "thread": "nvmf_tgt_poll_group_000", 00:15:39.616 "listen_address": { 00:15:39.616 "trtype": "TCP", 00:15:39.616 "adrfam": "IPv4", 00:15:39.616 "traddr": "10.0.0.2", 00:15:39.616 "trsvcid": "4420" 00:15:39.616 }, 00:15:39.616 "peer_address": { 00:15:39.616 "trtype": "TCP", 00:15:39.616 "adrfam": "IPv4", 00:15:39.616 "traddr": "10.0.0.1", 00:15:39.616 "trsvcid": "46654" 00:15:39.616 }, 00:15:39.616 "auth": { 00:15:39.616 "state": "completed", 00:15:39.616 "digest": "sha512", 00:15:39.616 "dhgroup": "ffdhe2048" 00:15:39.616 } 00:15:39.616 } 00:15:39.616 ]' 00:15:39.616 03:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.874 03:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.875 03:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.875 03:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.875 03:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.875 03:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.875 03:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.875 03:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.134 03:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:15:40.701 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.701 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:40.701 03:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.701 03:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.701 03:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.701 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.701 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:40.701 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:40.960 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:40.960 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.960 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:40.960 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:40.960 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.960 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.960 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:15:40.960 03:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.960 03:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.960 03:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.960 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.960 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.218 00:15:41.218 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.218 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.218 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.477 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.477 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.477 03:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.477 03:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.736 03:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.736 03:04:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:15:41.736 { 00:15:41.736 "cntlid": 111, 00:15:41.736 "qid": 0, 00:15:41.736 "state": "enabled", 00:15:41.736 "thread": "nvmf_tgt_poll_group_000", 00:15:41.736 "listen_address": { 00:15:41.736 "trtype": "TCP", 00:15:41.736 "adrfam": "IPv4", 00:15:41.736 "traddr": "10.0.0.2", 00:15:41.736 "trsvcid": "4420" 00:15:41.736 }, 00:15:41.736 "peer_address": { 00:15:41.736 "trtype": "TCP", 00:15:41.736 "adrfam": "IPv4", 00:15:41.736 "traddr": "10.0.0.1", 00:15:41.736 "trsvcid": "46666" 00:15:41.736 }, 00:15:41.736 "auth": { 00:15:41.736 "state": "completed", 00:15:41.736 "digest": "sha512", 00:15:41.736 "dhgroup": "ffdhe2048" 00:15:41.736 } 00:15:41.736 } 00:15:41.736 ]' 00:15:41.736 03:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.736 03:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:41.736 03:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.736 03:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:41.736 03:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.736 03:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.736 03:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.736 03:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.995 03:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:15:42.563 03:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.563 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:42.563 03:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.563 03:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.563 03:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.563 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.563 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.563 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:42.563 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:42.822 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:15:42.822 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.822 03:04:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:15:42.822 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:42.822 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:42.822 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.822 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.822 03:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.822 03:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.822 03:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.822 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.822 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.082 00:15:43.082 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.082 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.082 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.650 { 00:15:43.650 "cntlid": 113, 00:15:43.650 "qid": 0, 00:15:43.650 "state": "enabled", 00:15:43.650 "thread": "nvmf_tgt_poll_group_000", 00:15:43.650 "listen_address": { 00:15:43.650 "trtype": "TCP", 00:15:43.650 "adrfam": "IPv4", 00:15:43.650 "traddr": "10.0.0.2", 00:15:43.650 "trsvcid": "4420" 00:15:43.650 }, 00:15:43.650 "peer_address": { 00:15:43.650 "trtype": "TCP", 00:15:43.650 "adrfam": "IPv4", 00:15:43.650 "traddr": "10.0.0.1", 00:15:43.650 "trsvcid": "46690" 00:15:43.650 }, 00:15:43.650 "auth": { 00:15:43.650 "state": "completed", 00:15:43.650 "digest": "sha512", 00:15:43.650 "dhgroup": "ffdhe3072" 00:15:43.650 } 00:15:43.650 } 00:15:43.650 ]' 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.650 03:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.650 03:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.908 03:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:15:44.475 03:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.475 03:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:44.475 03:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.475 03:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.475 03:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.475 03:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.475 03:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.475 03:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.733 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:44.733 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.733 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:44.733 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:44.733 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:44.733 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.733 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.733 03:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.733 03:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.733 03:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.734 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.734 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.992 00:15:44.992 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.992 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.992 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.558 { 00:15:45.558 "cntlid": 115, 00:15:45.558 "qid": 0, 00:15:45.558 "state": "enabled", 00:15:45.558 "thread": "nvmf_tgt_poll_group_000", 00:15:45.558 "listen_address": { 00:15:45.558 "trtype": "TCP", 00:15:45.558 "adrfam": "IPv4", 00:15:45.558 "traddr": "10.0.0.2", 00:15:45.558 "trsvcid": "4420" 00:15:45.558 }, 00:15:45.558 "peer_address": { 00:15:45.558 "trtype": "TCP", 00:15:45.558 "adrfam": "IPv4", 00:15:45.558 "traddr": "10.0.0.1", 00:15:45.558 "trsvcid": "46728" 00:15:45.558 }, 00:15:45.558 "auth": { 00:15:45.558 "state": "completed", 00:15:45.558 "digest": "sha512", 00:15:45.558 "dhgroup": "ffdhe3072" 00:15:45.558 } 00:15:45.558 } 00:15:45.558 ]' 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.558 03:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.816 03:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:15:46.380 03:04:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.380 03:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:46.380 03:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.380 03:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.380 03:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.380 03:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.380 03:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:46.380 03:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:46.637 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:46.637 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.637 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:46.637 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:46.637 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:46.637 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.637 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.637 03:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.637 03:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.637 03:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.637 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.637 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.203 00:15:47.203 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.203 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.203 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.461 { 00:15:47.461 "cntlid": 117, 00:15:47.461 "qid": 0, 00:15:47.461 "state": "enabled", 00:15:47.461 "thread": "nvmf_tgt_poll_group_000", 00:15:47.461 "listen_address": { 00:15:47.461 "trtype": "TCP", 00:15:47.461 "adrfam": "IPv4", 00:15:47.461 "traddr": "10.0.0.2", 00:15:47.461 "trsvcid": "4420" 00:15:47.461 }, 00:15:47.461 "peer_address": { 00:15:47.461 "trtype": "TCP", 00:15:47.461 "adrfam": "IPv4", 00:15:47.461 "traddr": "10.0.0.1", 00:15:47.461 "trsvcid": "46744" 00:15:47.461 }, 00:15:47.461 "auth": { 00:15:47.461 "state": "completed", 00:15:47.461 "digest": "sha512", 00:15:47.461 "dhgroup": "ffdhe3072" 00:15:47.461 } 00:15:47.461 } 00:15:47.461 ]' 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.461 03:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.719 03:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:15:48.654 03:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.654 03:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:48.654 03:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.654 03:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.654 03:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.654 03:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.654 03:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:48.654 03:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:48.654 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:48.654 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.654 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:48.654 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:48.654 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:48.654 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.654 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:15:48.654 03:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.654 03:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.654 03:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.654 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:48.654 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:48.912 00:15:48.912 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.912 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.912 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.171 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.171 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.171 03:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.171 03:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.171 03:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.171 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.171 { 00:15:49.171 "cntlid": 119, 00:15:49.171 "qid": 0, 00:15:49.171 "state": "enabled", 00:15:49.171 "thread": "nvmf_tgt_poll_group_000", 00:15:49.171 "listen_address": { 00:15:49.171 "trtype": "TCP", 00:15:49.171 "adrfam": "IPv4", 00:15:49.171 "traddr": "10.0.0.2", 00:15:49.171 "trsvcid": "4420" 00:15:49.171 }, 00:15:49.171 "peer_address": { 00:15:49.171 "trtype": "TCP", 00:15:49.171 "adrfam": "IPv4", 00:15:49.171 "traddr": "10.0.0.1", 00:15:49.171 "trsvcid": "46760" 00:15:49.171 }, 00:15:49.171 "auth": { 00:15:49.171 "state": "completed", 00:15:49.171 "digest": "sha512", 00:15:49.171 "dhgroup": "ffdhe3072" 00:15:49.171 } 00:15:49.171 } 00:15:49.171 ]' 00:15:49.171 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.429 
03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.429 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.429 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.429 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.430 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.430 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.430 03:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.688 03:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:15:50.623 03:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.623 03:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:50.623 03:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.623 03:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.623 03:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.623 03:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.623 03:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.623 03:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:50.623 03:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:50.623 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:15:50.623 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.623 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:50.623 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:50.623 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:50.623 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.623 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.623 03:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.623 03:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.623 03:04:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.623 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.623 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.190 00:15:51.190 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.190 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.190 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.190 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.190 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.190 03:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.190 03:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.190 03:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.190 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.190 { 00:15:51.190 "cntlid": 121, 00:15:51.190 "qid": 0, 00:15:51.190 "state": "enabled", 00:15:51.190 "thread": "nvmf_tgt_poll_group_000", 00:15:51.190 "listen_address": { 00:15:51.190 "trtype": "TCP", 00:15:51.190 "adrfam": "IPv4", 00:15:51.190 "traddr": "10.0.0.2", 00:15:51.190 "trsvcid": "4420" 00:15:51.190 }, 00:15:51.190 "peer_address": { 00:15:51.190 "trtype": "TCP", 00:15:51.190 "adrfam": "IPv4", 00:15:51.190 "traddr": "10.0.0.1", 00:15:51.190 "trsvcid": "48492" 00:15:51.190 }, 00:15:51.190 "auth": { 00:15:51.190 "state": "completed", 00:15:51.190 "digest": "sha512", 00:15:51.190 "dhgroup": "ffdhe4096" 00:15:51.190 } 00:15:51.190 } 00:15:51.190 ]' 00:15:51.190 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.448 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.448 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.448 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:51.448 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.448 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.448 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.448 03:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.706 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret 
DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:15:52.317 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.317 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:52.317 03:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.317 03:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.317 03:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.317 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.317 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:52.317 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:52.575 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:15:52.575 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.575 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:52.575 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:52.575 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:52.575 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.575 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.575 03:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.575 03:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.575 03:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.575 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.575 03:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.833 00:15:52.833 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.833 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.833 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:15:53.092 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.092 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.092 03:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.092 03:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.092 03:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.092 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.092 { 00:15:53.092 "cntlid": 123, 00:15:53.092 "qid": 0, 00:15:53.092 "state": "enabled", 00:15:53.092 "thread": "nvmf_tgt_poll_group_000", 00:15:53.092 "listen_address": { 00:15:53.092 "trtype": "TCP", 00:15:53.092 "adrfam": "IPv4", 00:15:53.092 "traddr": "10.0.0.2", 00:15:53.092 "trsvcid": "4420" 00:15:53.092 }, 00:15:53.092 "peer_address": { 00:15:53.092 "trtype": "TCP", 00:15:53.092 "adrfam": "IPv4", 00:15:53.092 "traddr": "10.0.0.1", 00:15:53.092 "trsvcid": "48518" 00:15:53.092 }, 00:15:53.092 "auth": { 00:15:53.092 "state": "completed", 00:15:53.092 "digest": "sha512", 00:15:53.092 "dhgroup": "ffdhe4096" 00:15:53.092 } 00:15:53.092 } 00:15:53.092 ]' 00:15:53.092 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.350 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.350 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.350 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:53.350 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.350 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.350 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.350 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.609 03:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:15:54.175 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.175 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:54.175 03:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.175 03:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.175 03:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.175 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.175 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:15:54.176 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:54.435 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:15:54.435 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.435 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:54.435 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:54.435 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:54.435 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.435 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.435 03:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.435 03:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.435 03:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.435 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.435 03:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.002 00:15:55.002 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.002 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.002 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.260 { 00:15:55.260 "cntlid": 125, 00:15:55.260 "qid": 0, 00:15:55.260 "state": "enabled", 00:15:55.260 "thread": "nvmf_tgt_poll_group_000", 00:15:55.260 "listen_address": { 00:15:55.260 "trtype": "TCP", 00:15:55.260 "adrfam": "IPv4", 00:15:55.260 "traddr": "10.0.0.2", 00:15:55.260 "trsvcid": "4420" 00:15:55.260 }, 00:15:55.260 "peer_address": { 00:15:55.260 "trtype": "TCP", 00:15:55.260 "adrfam": "IPv4", 00:15:55.260 "traddr": "10.0.0.1", 00:15:55.260 "trsvcid": "48554" 00:15:55.260 }, 00:15:55.260 
"auth": { 00:15:55.260 "state": "completed", 00:15:55.260 "digest": "sha512", 00:15:55.260 "dhgroup": "ffdhe4096" 00:15:55.260 } 00:15:55.260 } 00:15:55.260 ]' 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.260 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.519 03:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:56.454 03:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.020 00:15:57.020 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.020 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.020 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.278 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.278 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.278 03:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.278 03:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.278 03:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.278 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.278 { 00:15:57.278 "cntlid": 127, 00:15:57.278 "qid": 0, 00:15:57.278 "state": "enabled", 00:15:57.278 "thread": "nvmf_tgt_poll_group_000", 00:15:57.278 "listen_address": { 00:15:57.278 "trtype": "TCP", 00:15:57.278 "adrfam": "IPv4", 00:15:57.278 "traddr": "10.0.0.2", 00:15:57.278 "trsvcid": "4420" 00:15:57.278 }, 00:15:57.278 "peer_address": { 00:15:57.278 "trtype": "TCP", 00:15:57.278 "adrfam": "IPv4", 00:15:57.278 "traddr": "10.0.0.1", 00:15:57.278 "trsvcid": "48588" 00:15:57.279 }, 00:15:57.279 "auth": { 00:15:57.279 "state": "completed", 00:15:57.279 "digest": "sha512", 00:15:57.279 "dhgroup": "ffdhe4096" 00:15:57.279 } 00:15:57.279 } 00:15:57.279 ]' 00:15:57.279 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.279 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.279 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.279 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.279 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.279 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.279 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.279 03:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.537 03:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:15:58.471 03:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.471 03:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:15:58.471 03:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.471 03:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.471 03:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.471 03:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.471 03:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.471 03:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:58.471 03:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:58.730 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:15:58.730 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.730 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:58.730 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:58.730 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:58.730 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.730 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.730 03:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.730 03:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.730 03:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.730 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.730 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.989 00:15:59.248 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.248 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
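Each connect_authenticate iteration visible in this trace repeats the same target/host handshake. The sketch below condenses one such iteration into the underlying rpc.py calls; it assumes the target-side rpc_cmd uses the default RPC socket while the host-side bdev_nvme service listens on /var/tmp/host.sock as shown above, and that key0/ckey0 are DH-HMAC-CHAP key names registered earlier in the run (not shown in this excerpt).

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208

# Host side: restrict the initiator to a single digest/dhgroup combination.
"$RPC" -s "$HOSTSOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Target side: allow the host NQN and bind its DH-HMAC-CHAP key (and optional controller key).
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attaching a controller triggers the authentication exchange.
"$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Tear down so the next digest/dhgroup/key combination starts from a clean state.
"$RPC" -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"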
00:15:59.248 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.507 { 00:15:59.507 "cntlid": 129, 00:15:59.507 "qid": 0, 00:15:59.507 "state": "enabled", 00:15:59.507 "thread": "nvmf_tgt_poll_group_000", 00:15:59.507 "listen_address": { 00:15:59.507 "trtype": "TCP", 00:15:59.507 "adrfam": "IPv4", 00:15:59.507 "traddr": "10.0.0.2", 00:15:59.507 "trsvcid": "4420" 00:15:59.507 }, 00:15:59.507 "peer_address": { 00:15:59.507 "trtype": "TCP", 00:15:59.507 "adrfam": "IPv4", 00:15:59.507 "traddr": "10.0.0.1", 00:15:59.507 "trsvcid": "48600" 00:15:59.507 }, 00:15:59.507 "auth": { 00:15:59.507 "state": "completed", 00:15:59.507 "digest": "sha512", 00:15:59.507 "dhgroup": "ffdhe6144" 00:15:59.507 } 00:15:59.507 } 00:15:59.507 ]' 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.507 03:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.765 03:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:16:00.701 03:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.701 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:00.701 03:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.701 03:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.701 03:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.701 
03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.701 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:00.701 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:00.960 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:00.960 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.960 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:00.960 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:00.960 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:00.960 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.960 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.960 03:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.960 03:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.960 03:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.960 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.960 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.527 00:16:01.527 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.527 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.527 03:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.785 { 00:16:01.785 "cntlid": 131, 00:16:01.785 "qid": 0, 00:16:01.785 "state": "enabled", 00:16:01.785 "thread": "nvmf_tgt_poll_group_000", 00:16:01.785 "listen_address": { 00:16:01.785 "trtype": "TCP", 00:16:01.785 "adrfam": "IPv4", 00:16:01.785 "traddr": "10.0.0.2", 00:16:01.785 "trsvcid": 
"4420" 00:16:01.785 }, 00:16:01.785 "peer_address": { 00:16:01.785 "trtype": "TCP", 00:16:01.785 "adrfam": "IPv4", 00:16:01.785 "traddr": "10.0.0.1", 00:16:01.785 "trsvcid": "42116" 00:16:01.785 }, 00:16:01.785 "auth": { 00:16:01.785 "state": "completed", 00:16:01.785 "digest": "sha512", 00:16:01.785 "dhgroup": "ffdhe6144" 00:16:01.785 } 00:16:01.785 } 00:16:01.785 ]' 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.785 03:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.044 03:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.979 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.545 00:16:03.545 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.545 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.545 03:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.802 03:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.802 03:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.802 03:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.802 03:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.802 03:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.802 03:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.802 { 00:16:03.802 "cntlid": 133, 00:16:03.802 "qid": 0, 00:16:03.802 "state": "enabled", 00:16:03.802 "thread": "nvmf_tgt_poll_group_000", 00:16:03.802 "listen_address": { 00:16:03.802 "trtype": "TCP", 00:16:03.802 "adrfam": "IPv4", 00:16:03.802 "traddr": "10.0.0.2", 00:16:03.802 "trsvcid": "4420" 00:16:03.802 }, 00:16:03.802 "peer_address": { 00:16:03.802 "trtype": "TCP", 00:16:03.802 "adrfam": "IPv4", 00:16:03.802 "traddr": "10.0.0.1", 00:16:03.802 "trsvcid": "42134" 00:16:03.802 }, 00:16:03.802 "auth": { 00:16:03.802 "state": "completed", 00:16:03.802 "digest": "sha512", 00:16:03.802 "dhgroup": "ffdhe6144" 00:16:03.802 } 00:16:03.802 } 00:16:03.802 ]' 00:16:03.802 03:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.802 03:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.802 03:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.059 03:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.059 03:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.059 03:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.059 03:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
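Between attach and detach, the test does not just assume the handshake worked; it reads the qpair list back from the target and asserts the negotiated auth parameters with jq, as in the target/auth.sh@44-48 checks above. A minimal sketch of that verification, reusing the variables from the previous sketch:

# Confirm the host-side controller came up under the expected name.
[[ $("$RPC" -s "$HOSTSOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Pull the qpairs for the subsystem and verify the completed authentication state.
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]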
00:16:04.059 03:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.316 03:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:16:04.881 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.881 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:04.881 03:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.881 03:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.881 03:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.881 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.881 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:04.881 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:05.139 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:05.139 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.139 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:05.139 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:05.139 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:05.139 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.139 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:16:05.139 03:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.139 03:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.139 03:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.139 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.139 03:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.705 00:16:05.705 03:05:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.705 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.705 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.964 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.964 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.964 03:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.964 03:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.964 03:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.964 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.964 { 00:16:05.964 "cntlid": 135, 00:16:05.964 "qid": 0, 00:16:05.964 "state": "enabled", 00:16:05.964 "thread": "nvmf_tgt_poll_group_000", 00:16:05.964 "listen_address": { 00:16:05.964 "trtype": "TCP", 00:16:05.964 "adrfam": "IPv4", 00:16:05.964 "traddr": "10.0.0.2", 00:16:05.964 "trsvcid": "4420" 00:16:05.964 }, 00:16:05.964 "peer_address": { 00:16:05.964 "trtype": "TCP", 00:16:05.964 "adrfam": "IPv4", 00:16:05.964 "traddr": "10.0.0.1", 00:16:05.964 "trsvcid": "42158" 00:16:05.964 }, 00:16:05.964 "auth": { 00:16:05.964 "state": "completed", 00:16:05.964 "digest": "sha512", 00:16:05.964 "dhgroup": "ffdhe6144" 00:16:05.964 } 00:16:05.964 } 00:16:05.964 ]' 00:16:05.964 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.964 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.964 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.964 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.964 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.224 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.224 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.224 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.224 03:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.157 03:05:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.157 03:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.724 00:16:07.724 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.724 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.724 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.982 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.982 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.982 03:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.982 03:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.241 03:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.241 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.241 { 00:16:08.241 "cntlid": 137, 00:16:08.241 "qid": 0, 00:16:08.241 "state": "enabled", 
00:16:08.241 "thread": "nvmf_tgt_poll_group_000", 00:16:08.241 "listen_address": { 00:16:08.241 "trtype": "TCP", 00:16:08.241 "adrfam": "IPv4", 00:16:08.241 "traddr": "10.0.0.2", 00:16:08.241 "trsvcid": "4420" 00:16:08.241 }, 00:16:08.241 "peer_address": { 00:16:08.241 "trtype": "TCP", 00:16:08.241 "adrfam": "IPv4", 00:16:08.241 "traddr": "10.0.0.1", 00:16:08.241 "trsvcid": "42188" 00:16:08.241 }, 00:16:08.241 "auth": { 00:16:08.241 "state": "completed", 00:16:08.241 "digest": "sha512", 00:16:08.241 "dhgroup": "ffdhe8192" 00:16:08.241 } 00:16:08.241 } 00:16:08.241 ]' 00:16:08.241 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.241 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.241 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.241 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:08.241 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.241 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.241 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.241 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.500 03:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:09.435 
03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.435 03:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.002 00:16:10.002 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.002 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.002 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.568 { 00:16:10.568 "cntlid": 139, 00:16:10.568 "qid": 0, 00:16:10.568 "state": "enabled", 00:16:10.568 "thread": "nvmf_tgt_poll_group_000", 00:16:10.568 "listen_address": { 00:16:10.568 "trtype": "TCP", 00:16:10.568 "adrfam": "IPv4", 00:16:10.568 "traddr": "10.0.0.2", 00:16:10.568 "trsvcid": "4420" 00:16:10.568 }, 00:16:10.568 "peer_address": { 00:16:10.568 "trtype": "TCP", 00:16:10.568 "adrfam": "IPv4", 00:16:10.568 "traddr": "10.0.0.1", 00:16:10.568 "trsvcid": "54656" 00:16:10.568 }, 00:16:10.568 "auth": { 00:16:10.568 "state": "completed", 00:16:10.568 "digest": "sha512", 00:16:10.568 "dhgroup": "ffdhe8192" 00:16:10.568 } 00:16:10.568 } 00:16:10.568 ]' 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.568 03:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.827 03:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:01:MWJhZDE1NWVhZWViYjM5MDhjMGZkNmQzZjdjYTJkM2R6X0UU: --dhchap-ctrl-secret DHHC-1:02:ODI3N2M4OTJlNTkwOTNiYTg1YWI1Mjc5MGIxMTE2MTJiNDY2NzdjYzQ2ZGUwYTQ2Cyryqw==: 00:16:11.394 03:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.653 03:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:11.653 03:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.653 03:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.653 03:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.653 03:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.653 03:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:11.653 03:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:11.653 03:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:11.653 03:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.653 03:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:11.653 03:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:11.653 03:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:11.653 03:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.653 03:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.653 03:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.653 03:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.912 03:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.912 03:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.912 03:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.492 00:16:12.492 03:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.492 03:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.492 03:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.758 03:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.759 03:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.759 03:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.759 03:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.759 03:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.759 03:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.759 { 00:16:12.759 "cntlid": 141, 00:16:12.759 "qid": 0, 00:16:12.759 "state": "enabled", 00:16:12.759 "thread": "nvmf_tgt_poll_group_000", 00:16:12.759 "listen_address": { 00:16:12.759 "trtype": "TCP", 00:16:12.759 "adrfam": "IPv4", 00:16:12.759 "traddr": "10.0.0.2", 00:16:12.759 "trsvcid": "4420" 00:16:12.759 }, 00:16:12.759 "peer_address": { 00:16:12.759 "trtype": "TCP", 00:16:12.759 "adrfam": "IPv4", 00:16:12.759 "traddr": "10.0.0.1", 00:16:12.759 "trsvcid": "54668" 00:16:12.759 }, 00:16:12.759 "auth": { 00:16:12.759 "state": "completed", 00:16:12.759 "digest": "sha512", 00:16:12.759 "dhgroup": "ffdhe8192" 00:16:12.759 } 00:16:12.759 } 00:16:12.759 ]' 00:16:12.759 03:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.759 03:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.759 03:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.759 03:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.759 03:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.017 03:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.017 03:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.017 03:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.276 03:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:02:YzI0ODFiNjUzZGM2YjJhZTM2NTBlZTI1YzgzMTk0YTlmYmIxMjExMzk3ZjRiMTg4H0Aceg==: --dhchap-ctrl-secret DHHC-1:01:MDc4MGQzNWU5MGU2M2UzZTBlZDFhYjgyYjAzY2IzODLSaxVa: 00:16:13.843 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.843 03:05:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:13.843 03:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.843 03:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.843 03:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.843 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.843 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:13.843 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:14.102 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:14.102 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.102 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:14.102 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:14.102 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:14.102 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.102 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:16:14.102 03:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.102 03:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.102 03:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.102 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.102 03:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.670 00:16:14.670 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.670 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.670 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.930 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.930 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.930 03:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.930 03:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.930 03:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
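In addition to the SPDK initiator path, each iteration also authenticates through the kernel initiator via nvme-cli, passing the DH-HMAC-CHAP secrets directly on the command line. A sketch of that step, with the DHHC-1 blobs shortened to placeholders standing in for the secrets printed in the trace:

HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208
# <host secret> / <ctrl secret> stand for the DHHC-1:xx:... strings shown in the log above.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:$HOSTID --hostid $HOSTID \
    --dhchap-secret '<host secret>' --dhchap-ctrl-secret '<ctrl secret>'

# Expect: "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0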
00:16:14.930 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.930 { 00:16:14.930 "cntlid": 143, 00:16:14.930 "qid": 0, 00:16:14.930 "state": "enabled", 00:16:14.930 "thread": "nvmf_tgt_poll_group_000", 00:16:14.930 "listen_address": { 00:16:14.930 "trtype": "TCP", 00:16:14.930 "adrfam": "IPv4", 00:16:14.930 "traddr": "10.0.0.2", 00:16:14.930 "trsvcid": "4420" 00:16:14.930 }, 00:16:14.930 "peer_address": { 00:16:14.930 "trtype": "TCP", 00:16:14.930 "adrfam": "IPv4", 00:16:14.930 "traddr": "10.0.0.1", 00:16:14.930 "trsvcid": "54694" 00:16:14.930 }, 00:16:14.930 "auth": { 00:16:14.930 "state": "completed", 00:16:14.930 "digest": "sha512", 00:16:14.930 "dhgroup": "ffdhe8192" 00:16:14.930 } 00:16:14.930 } 00:16:14.930 ]' 00:16:14.930 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.930 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.189 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.189 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.189 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.189 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.189 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.189 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.449 03:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:16:16.015 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.015 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:16.015 03:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.015 03:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.015 03:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.015 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:16.015 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:16.015 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:16.015 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:16.015 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:16.015 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:16.273 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:16.273 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.273 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.273 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:16.273 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:16.273 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.273 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.273 03:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.273 03:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.273 03:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.274 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.274 03:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.839 00:16:16.839 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.839 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.839 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.096 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.096 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.096 03:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.096 03:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.096 03:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.096 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.096 { 00:16:17.096 "cntlid": 145, 00:16:17.096 "qid": 0, 00:16:17.096 "state": "enabled", 00:16:17.096 "thread": "nvmf_tgt_poll_group_000", 00:16:17.096 "listen_address": { 00:16:17.096 "trtype": "TCP", 00:16:17.096 "adrfam": "IPv4", 00:16:17.096 "traddr": "10.0.0.2", 00:16:17.096 "trsvcid": "4420" 00:16:17.096 }, 00:16:17.096 "peer_address": { 00:16:17.096 "trtype": "TCP", 00:16:17.096 "adrfam": "IPv4", 00:16:17.096 "traddr": "10.0.0.1", 00:16:17.096 "trsvcid": "54734" 00:16:17.096 }, 00:16:17.096 "auth": { 00:16:17.096 "state": "completed", 00:16:17.096 "digest": "sha512", 00:16:17.096 "dhgroup": "ffdhe8192" 00:16:17.096 } 00:16:17.096 } 
00:16:17.096 ]' 00:16:17.096 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.096 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.096 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.096 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.096 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.353 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.353 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.353 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.610 03:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:00:YTlhMWQ5NTExNGM2MTZkMWEwMzBiYjAxNGNkOWFkNmI5ZGUwMTA4Yzk4ZjIyODkxP4WReA==: --dhchap-ctrl-secret DHHC-1:03:NzMzOGQ0ZjExNjVjMWExMGI4Njg1NGYxNWQ2ZjQwYjgwM2JkNWU4ZDJjNzU2YjgyY2I1NzcyMTYzN2VkZTU5Mx+VvAc=: 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:18.175 03:05:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:18.175 03:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:18.741 request: 00:16:18.741 { 00:16:18.741 "name": "nvme0", 00:16:18.741 "trtype": "tcp", 00:16:18.741 "traddr": "10.0.0.2", 00:16:18.741 "adrfam": "ipv4", 00:16:18.741 "trsvcid": "4420", 00:16:18.741 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:18.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208", 00:16:18.741 "prchk_reftag": false, 00:16:18.741 "prchk_guard": false, 00:16:18.741 "hdgst": false, 00:16:18.741 "ddgst": false, 00:16:18.741 "dhchap_key": "key2", 00:16:18.741 "method": "bdev_nvme_attach_controller", 00:16:18.741 "req_id": 1 00:16:18.741 } 00:16:18.741 Got JSON-RPC error response 00:16:18.741 response: 00:16:18.741 { 00:16:18.741 "code": -5, 00:16:18.741 "message": "Input/output error" 00:16:18.741 } 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:18.741 03:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:19.308 request: 00:16:19.308 { 00:16:19.308 "name": "nvme0", 00:16:19.308 "trtype": "tcp", 00:16:19.308 "traddr": "10.0.0.2", 00:16:19.308 "adrfam": "ipv4", 00:16:19.308 "trsvcid": "4420", 00:16:19.308 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:19.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208", 00:16:19.308 "prchk_reftag": false, 00:16:19.308 "prchk_guard": false, 00:16:19.308 "hdgst": false, 00:16:19.308 "ddgst": false, 00:16:19.308 "dhchap_key": "key1", 00:16:19.308 "dhchap_ctrlr_key": "ckey2", 00:16:19.308 "method": "bdev_nvme_attach_controller", 00:16:19.308 "req_id": 1 00:16:19.308 } 00:16:19.308 Got JSON-RPC error response 00:16:19.308 response: 00:16:19.308 { 00:16:19.308 "code": -5, 00:16:19.308 "message": "Input/output error" 00:16:19.308 } 00:16:19.308 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:19.308 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:19.308 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:19.308 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:19.308 03:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:19.308 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.308 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key1 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.566 03:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.132 request: 00:16:20.132 { 00:16:20.132 "name": "nvme0", 00:16:20.132 "trtype": "tcp", 00:16:20.132 "traddr": "10.0.0.2", 00:16:20.132 "adrfam": "ipv4", 00:16:20.132 "trsvcid": "4420", 00:16:20.132 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:20.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208", 00:16:20.132 "prchk_reftag": false, 00:16:20.132 "prchk_guard": false, 00:16:20.132 "hdgst": false, 00:16:20.132 "ddgst": false, 00:16:20.132 "dhchap_key": "key1", 00:16:20.132 "dhchap_ctrlr_key": "ckey1", 00:16:20.132 "method": "bdev_nvme_attach_controller", 00:16:20.132 "req_id": 1 00:16:20.132 } 00:16:20.132 Got JSON-RPC error response 00:16:20.132 response: 00:16:20.132 { 00:16:20.132 "code": -5, 00:16:20.132 "message": "Input/output error" 00:16:20.132 } 00:16:20.132 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:20.132 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:20.132 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:20.132 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:20.132 03:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:20.132 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.132 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.132 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.132 03:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 72038 00:16:20.132 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72038 ']' 00:16:20.132 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72038 00:16:20.133 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:20.133 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:20.133 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72038 00:16:20.133 killing process with pid 72038 00:16:20.133 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:20.133 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:20.133 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72038' 00:16:20.133 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72038 00:16:20.133 03:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72038 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=74970 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 74970 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 74970 ']' 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.067 03:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
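The killprocess trace above (pid 72038, the first nvmf_tgt instance) follows a recurring shutdown pattern in common/autotest_common.sh: check that the pid is still alive with kill -0, resolve its command name with ps (reactor_0 here) to decide whether it is a sudo wrapper, print the "killing process" message, then kill and wait. A minimal sketch of that pattern, reconstructed only from the flags visible in the trace; the real helper has additional branches (for example the sudo case) that are omitted here:

    # Sketch only, not the exact autotest_common.sh implementation.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1                   # mirrors the '[' -z ... ']' guard
        kill -0 "$pid" || return 1                  # process must still exist
        local process_name=unknown
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        fi
        if [ "$process_name" = sudo ]; then
            :                                       # the real helper treats sudo-wrapped targets specially
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # valid because nvmf_tgt was started by this shell
    }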
00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 74970 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 74970 ']' 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.444 03:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.703 03:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:23.641 00:16:23.641 03:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.641 03:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.641 03:05:29 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.641 03:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.641 03:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.641 03:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.641 03:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.641 03:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.641 03:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.641 { 00:16:23.641 "cntlid": 1, 00:16:23.641 "qid": 0, 00:16:23.641 "state": "enabled", 00:16:23.641 "thread": "nvmf_tgt_poll_group_000", 00:16:23.641 "listen_address": { 00:16:23.641 "trtype": "TCP", 00:16:23.641 "adrfam": "IPv4", 00:16:23.641 "traddr": "10.0.0.2", 00:16:23.641 "trsvcid": "4420" 00:16:23.641 }, 00:16:23.641 "peer_address": { 00:16:23.641 "trtype": "TCP", 00:16:23.641 "adrfam": "IPv4", 00:16:23.641 "traddr": "10.0.0.1", 00:16:23.641 "trsvcid": "52306" 00:16:23.641 }, 00:16:23.641 "auth": { 00:16:23.641 "state": "completed", 00:16:23.641 "digest": "sha512", 00:16:23.641 "dhgroup": "ffdhe8192" 00:16:23.641 } 00:16:23.641 } 00:16:23.641 ]' 00:16:23.641 03:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.900 03:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.900 03:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.900 03:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.900 03:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.900 03:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.900 03:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.900 03:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.159 03:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-secret DHHC-1:03:ZTk1N2ZlMmQ1NmM1MmRkZGVjMTRiY2YyY2ZlNTZkMDliM2Q2MWM0NWEzNmMxZmZiNTMwNGM4Nzg1YTU4ZTViOBddEoY=: 00:16:25.101 03:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.101 03:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:25.101 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.101 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.101 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.101 03:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --dhchap-key key3 00:16:25.101 03:05:31 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.101 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.101 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.102 03:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:25.102 03:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:25.102 03:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.102 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:25.102 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.102 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:25.102 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.102 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:25.102 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.102 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.102 03:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.361 request: 00:16:25.361 { 00:16:25.361 "name": "nvme0", 00:16:25.361 "trtype": "tcp", 00:16:25.361 "traddr": "10.0.0.2", 00:16:25.361 "adrfam": "ipv4", 00:16:25.361 "trsvcid": "4420", 00:16:25.361 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:25.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208", 00:16:25.361 "prchk_reftag": false, 00:16:25.361 "prchk_guard": false, 00:16:25.361 "hdgst": false, 00:16:25.361 "ddgst": false, 00:16:25.361 "dhchap_key": "key3", 00:16:25.361 "method": "bdev_nvme_attach_controller", 00:16:25.361 "req_id": 1 00:16:25.361 } 00:16:25.361 Got JSON-RPC error response 00:16:25.361 response: 00:16:25.361 { 00:16:25.361 "code": -5, 00:16:25.361 "message": "Input/output error" 00:16:25.361 } 00:16:25.361 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:25.361 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:25.361 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:25.361 03:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:25.361 03:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 
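Each expected-failure case in this trace is wrapped in the NOT helper: the assertion passes only if the wrapped bdev_nvme_attach_controller call fails, which is the expected outcome once the host has been restricted to sha256 digests. A minimal sketch of that inverted assertion, using only what the trace shows (local es=0, capture the exit status, require it to be non-zero); the real helper first validates its argument via valid_exec_arg (the type -t checks above) and special-cases statuses above 128, both omitted here:

    # Sketch only, simplified from the NOT / valid_exec_arg pattern in the trace.
    NOT_sketch() {
        local es=0
        "$@" || es=$?        # run the wrapped command and remember its exit status
        (( es != 0 ))        # same effect as the trace's (( !es == 0 )): succeed only on failure
    }

    # Usage, mirroring target/auth.sh:
    #   NOT_sketch hostrpc bdev_nvme_attach_controller ... --dhchap-key key3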
00:16:25.361 03:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:25.361 03:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:25.361 03:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:25.619 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.619 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:25.620 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.620 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:25.620 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.620 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.879 request: 00:16:25.879 { 00:16:25.879 "name": "nvme0", 00:16:25.879 "trtype": "tcp", 00:16:25.879 "traddr": "10.0.0.2", 00:16:25.879 "adrfam": "ipv4", 00:16:25.879 "trsvcid": "4420", 00:16:25.879 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:25.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208", 00:16:25.879 "prchk_reftag": false, 00:16:25.879 "prchk_guard": false, 00:16:25.879 "hdgst": false, 00:16:25.879 "ddgst": false, 00:16:25.879 "dhchap_key": "key3", 00:16:25.879 "method": "bdev_nvme_attach_controller", 00:16:25.879 "req_id": 1 00:16:25.879 } 00:16:25.879 Got JSON-RPC error response 00:16:25.879 response: 00:16:25.879 { 00:16:25.879 "code": -5, 00:16:25.879 "message": "Input/output error" 00:16:25.879 } 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf 
%s sha256,sha384,sha512 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:25.879 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:26.138 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 
00:16:26.397 request: 00:16:26.397 { 00:16:26.397 "name": "nvme0", 00:16:26.397 "trtype": "tcp", 00:16:26.397 "traddr": "10.0.0.2", 00:16:26.397 "adrfam": "ipv4", 00:16:26.397 "trsvcid": "4420", 00:16:26.397 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:26.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208", 00:16:26.397 "prchk_reftag": false, 00:16:26.397 "prchk_guard": false, 00:16:26.397 "hdgst": false, 00:16:26.397 "ddgst": false, 00:16:26.397 "dhchap_key": "key0", 00:16:26.397 "dhchap_ctrlr_key": "key1", 00:16:26.397 "method": "bdev_nvme_attach_controller", 00:16:26.397 "req_id": 1 00:16:26.397 } 00:16:26.397 Got JSON-RPC error response 00:16:26.397 response: 00:16:26.397 { 00:16:26.397 "code": -5, 00:16:26.397 "message": "Input/output error" 00:16:26.397 } 00:16:26.397 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:26.397 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:26.397 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:26.397 03:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:26.397 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:26.397 03:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:26.656 00:16:26.914 03:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:26.914 03:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:26.914 03:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.172 03:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.172 03:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.173 03:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.173 03:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:27.173 03:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:27.173 03:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 72070 00:16:27.173 03:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72070 ']' 00:16:27.173 03:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72070 00:16:27.173 03:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:27.173 03:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:27.173 03:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72070 00:16:27.431 03:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:27.431 killing process with pid 72070 00:16:27.431 03:05:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:27.431 03:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72070' 00:16:27.431 03:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72070 00:16:27.431 03:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72070 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.338 rmmod nvme_tcp 00:16:29.338 rmmod nvme_fabrics 00:16:29.338 rmmod nvme_keyring 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 74970 ']' 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 74970 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 74970 ']' 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 74970 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74970 00:16:29.338 killing process with pid 74970 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74970' 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 74970 00:16:29.338 03:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 74970 00:16:30.275 03:05:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:30.275 03:05:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:30.275 03:05:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:30.275 03:05:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.275 03:05:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:30.275 03:05:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.275 03:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.275 03:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.564 03:05:36 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:30.564 03:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.t9h /tmp/spdk.key-sha256.0Sl /tmp/spdk.key-sha384.kIg /tmp/spdk.key-sha512.EEX /tmp/spdk.key-sha512.ZOs /tmp/spdk.key-sha384.Weo /tmp/spdk.key-sha256.Irv '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:16:30.564 00:16:30.564 real 2m44.348s 00:16:30.564 user 6m32.797s 00:16:30.564 sys 0m23.368s 00:16:30.564 03:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:30.564 ************************************ 00:16:30.564 END TEST nvmf_auth_target 00:16:30.564 ************************************ 00:16:30.564 03:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.565 03:05:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:30.565 03:05:36 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:16:30.565 03:05:36 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:30.565 03:05:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:30.565 03:05:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.565 03:05:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:30.565 ************************************ 00:16:30.565 START TEST nvmf_bdevio_no_huge 00:16:30.565 ************************************ 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:30.565 * Looking for test storage... 00:16:30.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:30.565 
03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.565 03:05:36 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:30.565 Cannot find device "nvmf_tgt_br" 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.565 Cannot find device "nvmf_tgt_br2" 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:30.565 03:05:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:30.565 Cannot find device "nvmf_tgt_br" 00:16:30.565 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:16:30.565 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:30.565 Cannot find device "nvmf_tgt_br2" 00:16:30.565 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:16:30.565 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.834 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.834 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:30.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:16:30.834 00:16:30.834 --- 10.0.0.2 ping statistics --- 00:16:30.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.834 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:30.834 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:30.835 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.835 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:30.835 00:16:30.835 --- 10.0.0.3 ping statistics --- 00:16:30.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.835 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:30.835 00:16:30.835 --- 10.0.0.1 ping statistics --- 00:16:30.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.835 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=75321 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 75321 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 75321 ']' 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.835 03:05:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:31.093 [2024-07-13 03:05:37.387302] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:31.093 [2024-07-13 03:05:37.387462] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:31.093 [2024-07-13 03:05:37.565817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.352 [2024-07-13 03:05:37.781128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:31.352 [2024-07-13 03:05:37.781191] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.352 [2024-07-13 03:05:37.781214] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.352 [2024-07-13 03:05:37.781226] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.352 [2024-07-13 03:05:37.781241] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.352 [2024-07-13 03:05:37.781378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:31.352 [2024-07-13 03:05:37.781496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:31.352 [2024-07-13 03:05:37.781655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:31.352 [2024-07-13 03:05:37.782071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.611 [2024-07-13 03:05:37.924858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:31.870 [2024-07-13 03:05:38.268519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:31.870 Malloc0 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:31.870 [2024-07-13 03:05:38.356706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:31.870 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:32.128 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:32.128 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:32.128 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:32.128 { 00:16:32.128 "params": { 00:16:32.128 "name": "Nvme$subsystem", 00:16:32.128 "trtype": "$TEST_TRANSPORT", 00:16:32.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.128 "adrfam": "ipv4", 00:16:32.128 "trsvcid": "$NVMF_PORT", 00:16:32.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.128 "hdgst": ${hdgst:-false}, 00:16:32.128 "ddgst": ${ddgst:-false} 00:16:32.128 }, 00:16:32.128 "method": "bdev_nvme_attach_controller" 00:16:32.128 } 00:16:32.128 EOF 00:16:32.128 )") 00:16:32.128 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:32.128 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:16:32.128 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:32.128 03:05:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:32.128 "params": { 00:16:32.128 "name": "Nvme1", 00:16:32.128 "trtype": "tcp", 00:16:32.128 "traddr": "10.0.0.2", 00:16:32.128 "adrfam": "ipv4", 00:16:32.128 "trsvcid": "4420", 00:16:32.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.128 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:32.128 "hdgst": false, 00:16:32.128 "ddgst": false 00:16:32.128 }, 00:16:32.128 "method": "bdev_nvme_attach_controller" 00:16:32.128 }' 00:16:32.128 [2024-07-13 03:05:38.466830] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
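The --json /dev/fd/62 argument above is fed by gen_nvmf_target_json, whose heredoc is scattered across the xtrace output. Collected in one place, with the variables replaced by the literal values of this run, the per-subsystem part is roughly:

  config=()
  for subsystem in 1; do
    config+=("$(
      cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # the helper then joins the entries (IFS=,; printf '%s\n' "${config[*]}") and runs them
  # through jq . on their way to bdevio's --json /dev/fd/62; any outer JSON wrapper it adds
  # is not visible in this trace and is not reproduced here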
00:16:32.128 [2024-07-13 03:05:38.467073] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid75357 ] 00:16:32.386 [2024-07-13 03:05:38.666488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:32.643 [2024-07-13 03:05:38.952511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.643 [2024-07-13 03:05:38.952669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.643 [2024-07-13 03:05:38.952680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.643 [2024-07-13 03:05:39.115348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:32.901 I/O targets: 00:16:32.901 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:32.901 00:16:32.901 00:16:32.901 CUnit - A unit testing framework for C - Version 2.1-3 00:16:32.901 http://cunit.sourceforge.net/ 00:16:32.901 00:16:32.901 00:16:32.901 Suite: bdevio tests on: Nvme1n1 00:16:32.901 Test: blockdev write read block ...passed 00:16:32.901 Test: blockdev write zeroes read block ...passed 00:16:32.901 Test: blockdev write zeroes read no split ...passed 00:16:32.901 Test: blockdev write zeroes read split ...passed 00:16:33.160 Test: blockdev write zeroes read split partial ...passed 00:16:33.160 Test: blockdev reset ...[2024-07-13 03:05:39.403442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:33.160 [2024-07-13 03:05:39.403641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:16:33.160 [2024-07-13 03:05:39.418074] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:33.160 passed 00:16:33.160 Test: blockdev write read 8 blocks ...passed 00:16:33.160 Test: blockdev write read size > 128k ...passed 00:16:33.160 Test: blockdev write read invalid size ...passed 00:16:33.160 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:33.160 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:33.160 Test: blockdev write read max offset ...passed 00:16:33.160 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:33.160 Test: blockdev writev readv 8 blocks ...passed 00:16:33.160 Test: blockdev writev readv 30 x 1block ...passed 00:16:33.160 Test: blockdev writev readv block ...passed 00:16:33.160 Test: blockdev writev readv size > 128k ...passed 00:16:33.160 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:33.160 Test: blockdev comparev and writev ...[2024-07-13 03:05:39.429881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.160 [2024-07-13 03:05:39.429957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.160 [2024-07-13 03:05:39.429991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.160 [2024-07-13 03:05:39.430012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:33.160 [2024-07-13 03:05:39.430461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.160 [2024-07-13 03:05:39.430521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:33.160 [2024-07-13 03:05:39.430551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.160 [2024-07-13 03:05:39.430571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:33.160 [2024-07-13 03:05:39.431059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.160 [2024-07-13 03:05:39.431101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:33.160 [2024-07-13 03:05:39.431129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.160 [2024-07-13 03:05:39.431151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:33.160 [2024-07-13 03:05:39.431574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.160 [2024-07-13 03:05:39.431617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:33.160 [2024-07-13 03:05:39.431660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.160 [2024-07-13 03:05:39.431679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:33.160 passed 00:16:33.160 Test: blockdev nvme passthru rw ...passed 00:16:33.160 Test: blockdev nvme passthru vendor specific ...[2024-07-13 03:05:39.432946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.160 [2024-07-13 03:05:39.432988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:33.160 [2024-07-13 03:05:39.433176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.160 [2024-07-13 03:05:39.433220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:33.160 [2024-07-13 03:05:39.433370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.160 [2024-07-13 03:05:39.433413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:33.160 [2024-07-13 03:05:39.433574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.160 [2024-07-13 03:05:39.433608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:33.160 passed 00:16:33.160 Test: blockdev nvme admin passthru ...passed 00:16:33.160 Test: blockdev copy ...passed 00:16:33.160 00:16:33.160 Run Summary: Type Total Ran Passed Failed Inactive 00:16:33.160 suites 1 1 n/a 0 0 00:16:33.160 tests 23 23 23 0 0 00:16:33.160 asserts 152 152 152 0 n/a 00:16:33.160 00:16:33.160 Elapsed time = 0.271 seconds 00:16:33.728 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.728 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.728 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:33.728 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.728 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:33.728 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:33.728 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:33.728 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:16:33.728 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:33.728 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:16:33.728 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:33.728 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:33.728 rmmod nvme_tcp 00:16:33.728 rmmod nvme_fabrics 00:16:33.987 rmmod nvme_keyring 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 75321 ']' 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 75321 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 75321 ']' 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 75321 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75321 00:16:33.987 killing process with pid 75321 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75321' 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 75321 00:16:33.987 03:05:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 75321 00:16:34.924 03:05:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:34.924 03:05:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:34.924 03:05:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:34.924 03:05:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.924 03:05:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:34.924 03:05:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.924 03:05:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.924 03:05:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.924 03:05:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:34.924 00:16:34.924 real 0m4.257s 00:16:34.924 user 0m15.312s 00:16:34.924 sys 0m1.329s 00:16:34.924 03:05:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:34.924 ************************************ 00:16:34.924 END TEST nvmf_bdevio_no_huge 00:16:34.924 03:05:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:34.924 ************************************ 00:16:34.924 03:05:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:34.924 03:05:41 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:34.924 03:05:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:34.924 03:05:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.924 03:05:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:34.924 ************************************ 00:16:34.924 START TEST nvmf_tls 00:16:34.924 ************************************ 00:16:34.924 03:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:34.924 * Looking for test storage... 
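The teardown that just ran is the generic pattern every target test in this log uses: the EXIT trap installed after nvmfappstart calls nvmftestfini, which unloads the NVMe/TCP kernel modules, kills the target by pid and flushes the initiator-side address. Condensed from the trace above, with the pid of this run filled in:

  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
  # ... test body ...
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 75321 && wait 75321            # killprocess: stop the nvmf_tgt started for this test
  ip -4 addr flush nvmf_init_if       # nvmf_tcp_fini: drop the initiator-side address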
00:16:34.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:34.925 Cannot find device "nvmf_tgt_br" 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:34.925 Cannot find device "nvmf_tgt_br2" 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:34.925 Cannot find device "nvmf_tgt_br" 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:34.925 Cannot find device "nvmf_tgt_br2" 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:34.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:34.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:34.925 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:35.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:16:35.185 00:16:35.185 --- 10.0.0.2 ping statistics --- 00:16:35.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.185 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:35.185 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.185 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:16:35.185 00:16:35.185 --- 10.0.0.3 ping statistics --- 00:16:35.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.185 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:35.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:35.185 00:16:35.185 --- 10.0.0.1 ping statistics --- 00:16:35.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.185 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:35.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=75546 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 75546 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 75546 ']' 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.185 03:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:35.445 [2024-07-13 03:05:41.728429] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:35.445 [2024-07-13 03:05:41.728618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.445 [2024-07-13 03:05:41.911624] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.703 [2024-07-13 03:05:42.073598] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.703 [2024-07-13 03:05:42.073687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:35.703 [2024-07-13 03:05:42.073701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.703 [2024-07-13 03:05:42.073713] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.703 [2024-07-13 03:05:42.073723] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.704 [2024-07-13 03:05:42.073758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.272 03:05:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.272 03:05:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:36.272 03:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:36.272 03:05:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:36.272 03:05:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.272 03:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.272 03:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:36.272 03:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:36.532 true 00:16:36.532 03:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:36.532 03:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:16:36.791 03:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:16:36.791 03:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:36.791 03:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:37.050 03:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:37.050 03:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:16:37.308 03:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:16:37.308 03:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:37.308 03:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:37.566 03:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:37.566 03:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:16:37.824 03:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:37.824 03:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:37.824 03:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:37.824 03:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:38.082 03:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:38.082 03:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:38.082 03:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:38.340 03:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:38.340 03:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
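Because this target was started with --wait-for-rpc, tls.sh can probe and adjust the ssl socket implementation before the framework is initialized. The version and kTLS checks traced here (and continued just below) reduce to a short rpc.py sequence; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py as elsewhere in this run:

  rpc.py sock_set_default_impl -i ssl                       # returns true
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py sock_impl_get_options -i ssl | jq -r .tls_version  # expect 13
  rpc.py sock_impl_set_options -i ssl --tls-version 7
  rpc.py sock_impl_get_options -i ssl | jq -r .tls_version  # expect 7
  rpc.py sock_impl_set_options -i ssl --enable-ktls
  rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls  # expect true
  rpc.py sock_impl_set_options -i ssl --disable-ktls
  rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls  # expect false
  rpc.py sock_impl_set_options -i ssl --tls-version 13      # final setting used for the test
  rpc.py framework_start_init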
00:16:38.599 03:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:38.599 03:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:38.599 03:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:38.857 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:38.857 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:39.115 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:39.115 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:39.115 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:39.115 03:05:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:39.115 03:05:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:39.115 03:05:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:39.115 03:05:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:39.115 03:05:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:39.115 03:05:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:39.115 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.unf3Fg6ZTR 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.4cmnibX8D9 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.unf3Fg6ZTR 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.4cmnibX8D9 00:16:39.116 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:39.375 03:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:39.634 [2024-07-13 03:05:46.079852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:16:39.893 03:05:46 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.unf3Fg6ZTR 00:16:39.893 03:05:46 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.unf3Fg6ZTR 00:16:39.894 03:05:46 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:39.894 [2024-07-13 03:05:46.365508] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.894 03:05:46 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:40.153 03:05:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:40.412 [2024-07-13 03:05:46.797601] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:40.412 [2024-07-13 03:05:46.797929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.412 03:05:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:40.671 malloc0 00:16:40.671 03:05:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:40.930 03:05:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.unf3Fg6ZTR 00:16:41.189 [2024-07-13 03:05:47.506010] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:41.189 03:05:47 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.unf3Fg6ZTR 00:16:53.394 Initializing NVMe Controllers 00:16:53.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:53.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:53.394 Initialization complete. Launching workers. 
00:16:53.394 ======================================================== 00:16:53.394 Latency(us) 00:16:53.394 Device Information : IOPS MiB/s Average min max 00:16:53.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7022.87 27.43 9116.08 1667.85 10827.70 00:16:53.394 ======================================================== 00:16:53.394 Total : 7022.87 27.43 9116.08 1667.85 10827.70 00:16:53.394 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.unf3Fg6ZTR 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.unf3Fg6ZTR' 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75782 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75782 /var/tmp/bdevperf.sock 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 75782 ']' 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.394 03:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:53.394 [2024-07-13 03:05:57.999304] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
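The key file handed to --psk-path above (and to --psk below) was produced earlier by format_interchange_psk. Its format_key helper base64-encodes the configured secret plus a CRC32 trailer inside the NVMe TLS PSK interchange framing. A sketch of that encoding for the first key of this run; the little-endian packing of the CRC is an assumption about format_key and is not visible in the trace:

  python3 - <<'PY'
  import base64, zlib
  key = b"00112233445566778899aabbccddeeff"      # used as literal ASCII bytes, not hex-decoded
  crc = zlib.crc32(key).to_bytes(4, "little")    # assumed byte order of the CRC trailer
  print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
  PY
  # compare with the key traced above: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
  # the result is written to /tmp/tmp.unf3Fg6ZTR, chmod 0600, and registered with
  # nvmf_subsystem_add_host --psk so that host1 can complete the TLS handshake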
00:16:53.394 [2024-07-13 03:05:57.999478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75782 ] 00:16:53.394 [2024-07-13 03:05:58.158811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.394 [2024-07-13 03:05:58.391551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.394 [2024-07-13 03:05:58.556102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:53.394 03:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.394 03:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:53.394 03:05:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.unf3Fg6ZTR 00:16:53.394 [2024-07-13 03:05:59.128475] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:53.394 [2024-07-13 03:05:59.128676] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:53.394 TLSTESTn1 00:16:53.394 03:05:59 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:53.394 Running I/O for 10 seconds... 00:17:03.388 00:17:03.388 Latency(us) 00:17:03.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.388 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:03.388 Verification LBA range: start 0x0 length 0x2000 00:17:03.388 TLSTESTn1 : 10.04 3048.51 11.91 0.00 0.00 41897.78 10664.49 26095.24 00:17:03.388 =================================================================================================================== 00:17:03.388 Total : 3048.51 11.91 0.00 0.00 41897.78 10664.49 26095.24 00:17:03.388 0 00:17:03.388 03:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:03.388 03:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 75782 00:17:03.388 03:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 75782 ']' 00:17:03.388 03:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 75782 00:17:03.388 03:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:03.388 03:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.388 03:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75782 00:17:03.388 killing process with pid 75782 00:17:03.388 Received shutdown signal, test time was about 10.000000 seconds 00:17:03.388 00:17:03.388 Latency(us) 00:17:03.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.388 =================================================================================================================== 00:17:03.388 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:03.388 03:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:03.388 03:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:17:03.388 03:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75782' 00:17:03.388 03:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 75782 00:17:03.388 [2024-07-13 03:06:09.410666] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:03.388 03:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 75782 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4cmnibX8D9 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4cmnibX8D9 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4cmnibX8D9 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.4cmnibX8D9' 00:17:04.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75922 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75922 /var/tmp/bdevperf.sock 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 75922 ']' 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.326 03:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.326 [2024-07-13 03:06:10.611431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
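run_bdevperf, used for the successful TLS run above and for the failure cases that follow, always has the same shape: start bdevperf on a private RPC socket (-z keeps it waiting for configuration), attach the target namespace as a bdev over NVMe/TCP with the PSK under test, then drive I/O through bdevperf.py. Condensed from the trace, with binary paths shortened:

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.unf3Fg6ZTR
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests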
00:17:04.326 [2024-07-13 03:06:10.611630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75922 ] 00:17:04.326 [2024-07-13 03:06:10.779742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.586 [2024-07-13 03:06:10.952220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.845 [2024-07-13 03:06:11.120299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:05.105 03:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.105 03:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:05.105 03:06:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4cmnibX8D9 00:17:05.364 [2024-07-13 03:06:11.720824] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:05.364 [2024-07-13 03:06:11.721094] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:05.364 [2024-07-13 03:06:11.735736] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:05.364 [2024-07-13 03:06:11.736597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:05.364 [2024-07-13 03:06:11.737570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:05.364 [2024-07-13 03:06:11.738568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:05.364 [2024-07-13 03:06:11.738613] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:05.365 [2024-07-13 03:06:11.738648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
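This failure is the point of the test case: /tmp/tmp.4cmnibX8D9 holds the second interchange key, which does not match the PSK registered for host1, so the connection is torn down during the TLS handshake and the attach RPC errors out (its JSON-RPC response is dumped next). The harness asserts on that with the NOT helper, roughly:

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4cmnibX8D9
  # NOT inverts the exit status, so the case passes only if bdev_nvme_attach_controller fails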
00:17:05.365 request: 00:17:05.365 { 00:17:05.365 "name": "TLSTEST", 00:17:05.365 "trtype": "tcp", 00:17:05.365 "traddr": "10.0.0.2", 00:17:05.365 "adrfam": "ipv4", 00:17:05.365 "trsvcid": "4420", 00:17:05.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.365 "prchk_reftag": false, 00:17:05.365 "prchk_guard": false, 00:17:05.365 "hdgst": false, 00:17:05.365 "ddgst": false, 00:17:05.365 "psk": "/tmp/tmp.4cmnibX8D9", 00:17:05.365 "method": "bdev_nvme_attach_controller", 00:17:05.365 "req_id": 1 00:17:05.365 } 00:17:05.365 Got JSON-RPC error response 00:17:05.365 response: 00:17:05.365 { 00:17:05.365 "code": -5, 00:17:05.365 "message": "Input/output error" 00:17:05.365 } 00:17:05.365 03:06:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 75922 00:17:05.365 03:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 75922 ']' 00:17:05.365 03:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 75922 00:17:05.365 03:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:05.365 03:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.365 03:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75922 00:17:05.365 killing process with pid 75922 00:17:05.365 Received shutdown signal, test time was about 10.000000 seconds 00:17:05.365 00:17:05.365 Latency(us) 00:17:05.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.365 =================================================================================================================== 00:17:05.365 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:05.365 03:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:05.365 03:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:05.365 03:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75922' 00:17:05.365 03:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 75922 00:17:05.365 [2024-07-13 03:06:11.785289] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:05.365 03:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 75922 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.unf3Fg6ZTR 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.unf3Fg6ZTR 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.unf3Fg6ZTR 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.unf3Fg6ZTR' 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75962 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75962 /var/tmp/bdevperf.sock 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 75962 ']' 00:17:06.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.306 03:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.564 [2024-07-13 03:06:12.859706] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:06.564 [2024-07-13 03:06:12.859938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75962 ] 00:17:06.564 [2024-07-13 03:06:13.032049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.821 [2024-07-13 03:06:13.186304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.079 [2024-07-13 03:06:13.337650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:07.338 03:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.338 03:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:07.338 03:06:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.unf3Fg6ZTR 00:17:07.598 [2024-07-13 03:06:13.906649] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:07.598 [2024-07-13 03:06:13.906837] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:07.598 [2024-07-13 03:06:13.915783] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:07.598 [2024-07-13 03:06:13.915843] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:07.598 [2024-07-13 03:06:13.915938] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:07.598 [2024-07-13 03:06:13.916047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:07.598 [2024-07-13 03:06:13.917020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:07.598 [2024-07-13 03:06:13.918020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:07.598 [2024-07-13 03:06:13.918069] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:07.598 [2024-07-13 03:06:13.918107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:07.598 request: 00:17:07.598 { 00:17:07.598 "name": "TLSTEST", 00:17:07.598 "trtype": "tcp", 00:17:07.598 "traddr": "10.0.0.2", 00:17:07.598 "adrfam": "ipv4", 00:17:07.598 "trsvcid": "4420", 00:17:07.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.598 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:07.598 "prchk_reftag": false, 00:17:07.598 "prchk_guard": false, 00:17:07.598 "hdgst": false, 00:17:07.598 "ddgst": false, 00:17:07.598 "psk": "/tmp/tmp.unf3Fg6ZTR", 00:17:07.598 "method": "bdev_nvme_attach_controller", 00:17:07.598 "req_id": 1 00:17:07.598 } 00:17:07.598 Got JSON-RPC error response 00:17:07.598 response: 00:17:07.598 { 00:17:07.598 "code": -5, 00:17:07.598 "message": "Input/output error" 00:17:07.598 } 00:17:07.598 03:06:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 75962 00:17:07.598 03:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 75962 ']' 00:17:07.598 03:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 75962 00:17:07.598 03:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:07.598 03:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.598 03:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75962 00:17:07.598 killing process with pid 75962 00:17:07.598 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.598 00:17:07.598 Latency(us) 00:17:07.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.598 =================================================================================================================== 00:17:07.598 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:07.598 03:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:07.598 03:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:07.598 03:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75962' 00:17:07.598 03:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 75962 00:17:07.598 [2024-07-13 03:06:13.956136] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:07.598 03:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 75962 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.unf3Fg6ZTR 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.unf3Fg6ZTR 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.unf3Fg6ZTR 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.unf3Fg6ZTR' 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75995 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75995 /var/tmp/bdevperf.sock 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 75995 ']' 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.537 03:06:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.537 [2024-07-13 03:06:15.021602] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:08.537 [2024-07-13 03:06:15.021778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75995 ] 00:17:08.796 [2024-07-13 03:06:15.192723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.056 [2024-07-13 03:06:15.359818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.056 [2024-07-13 03:06:15.517043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:09.624 03:06:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.624 03:06:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:09.624 03:06:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.unf3Fg6ZTR 00:17:09.884 [2024-07-13 03:06:16.142479] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:09.884 [2024-07-13 03:06:16.142700] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:09.884 [2024-07-13 03:06:16.156364] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:09.884 [2024-07-13 03:06:16.156423] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:09.884 [2024-07-13 03:06:16.156501] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:09.884 [2024-07-13 03:06:16.157132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:09.884 [2024-07-13 03:06:16.158088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:09.884 [2024-07-13 03:06:16.159080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:09.884 [2024-07-13 03:06:16.159130] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:09.884 [2024-07-13 03:06:16.159147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:09.884 request: 00:17:09.884 { 00:17:09.884 "name": "TLSTEST", 00:17:09.884 "trtype": "tcp", 00:17:09.884 "traddr": "10.0.0.2", 00:17:09.884 "adrfam": "ipv4", 00:17:09.884 "trsvcid": "4420", 00:17:09.884 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:09.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:09.884 "prchk_reftag": false, 00:17:09.884 "prchk_guard": false, 00:17:09.884 "hdgst": false, 00:17:09.884 "ddgst": false, 00:17:09.884 "psk": "/tmp/tmp.unf3Fg6ZTR", 00:17:09.884 "method": "bdev_nvme_attach_controller", 00:17:09.884 "req_id": 1 00:17:09.884 } 00:17:09.884 Got JSON-RPC error response 00:17:09.884 response: 00:17:09.884 { 00:17:09.884 "code": -5, 00:17:09.884 "message": "Input/output error" 00:17:09.884 } 00:17:09.884 03:06:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 75995 00:17:09.884 03:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 75995 ']' 00:17:09.884 03:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 75995 00:17:09.884 03:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:09.884 03:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:09.884 03:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75995 00:17:09.884 killing process with pid 75995 00:17:09.884 Received shutdown signal, test time was about 10.000000 seconds 00:17:09.884 00:17:09.884 Latency(us) 00:17:09.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.884 =================================================================================================================== 00:17:09.884 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:09.884 03:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:09.884 03:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:09.884 03:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75995' 00:17:09.884 03:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 75995 00:17:09.884 [2024-07-13 03:06:16.209910] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:09.884 03:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 75995 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76025 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76025 /var/tmp/bdevperf.sock 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76025 ']' 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.821 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.822 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.822 03:06:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.822 [2024-07-13 03:06:17.166893] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:10.822 [2024-07-13 03:06:17.167114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76025 ] 00:17:11.080 [2024-07-13 03:06:17.324412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.080 [2024-07-13 03:06:17.487165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.338 [2024-07-13 03:06:17.651531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:11.905 03:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.905 03:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:11.905 03:06:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:11.905 [2024-07-13 03:06:18.316688] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:11.905 [2024-07-13 03:06:18.318580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:17:11.905 [2024-07-13 03:06:18.319568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:11.905 [2024-07-13 03:06:18.319619] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:11.905 [2024-07-13 03:06:18.319637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:11.905 request: 00:17:11.905 { 00:17:11.905 "name": "TLSTEST", 00:17:11.905 "trtype": "tcp", 00:17:11.905 "traddr": "10.0.0.2", 00:17:11.905 "adrfam": "ipv4", 00:17:11.905 "trsvcid": "4420", 00:17:11.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.905 "prchk_reftag": false, 00:17:11.905 "prchk_guard": false, 00:17:11.905 "hdgst": false, 00:17:11.905 "ddgst": false, 00:17:11.905 "method": "bdev_nvme_attach_controller", 00:17:11.905 "req_id": 1 00:17:11.905 } 00:17:11.905 Got JSON-RPC error response 00:17:11.905 response: 00:17:11.905 { 00:17:11.905 "code": -5, 00:17:11.905 "message": "Input/output error" 00:17:11.905 } 00:17:11.906 03:06:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 76025 00:17:11.906 03:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76025 ']' 00:17:11.906 03:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76025 00:17:11.906 03:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:11.906 03:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.906 03:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76025 00:17:11.906 killing process with pid 76025 00:17:11.906 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.906 00:17:11.906 Latency(us) 00:17:11.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.906 =================================================================================================================== 00:17:11.906 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.906 03:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:11.906 03:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:11.906 03:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76025' 00:17:11.906 03:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76025 00:17:11.906 03:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76025 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 75546 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 75546 ']' 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 75546 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75546 00:17:13.279 killing process with pid 75546 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
75546' 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 75546 00:17:13.279 [2024-07-13 03:06:19.419117] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:13.279 03:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 75546 00:17:14.213 03:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:14.213 03:06:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:14.213 03:06:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:14.213 03:06:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:14.213 03:06:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:14.213 03:06:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:14.213 03:06:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:14.213 03:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:14.213 03:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.oWtksKmxih 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.oWtksKmxih 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76081 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76081 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76081 ']' 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.214 03:06:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:14.471 [2024-07-13 03:06:20.751495] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:14.471 [2024-07-13 03:06:20.751693] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.471 [2024-07-13 03:06:20.926870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.729 [2024-07-13 03:06:21.100196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.729 [2024-07-13 03:06:21.100293] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.729 [2024-07-13 03:06:21.100309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.729 [2024-07-13 03:06:21.100323] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.729 [2024-07-13 03:06:21.100334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.729 [2024-07-13 03:06:21.100371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.987 [2024-07-13 03:06:21.265629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:15.245 03:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.245 03:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:15.245 03:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.245 03:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.245 03:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.245 03:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.245 03:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.oWtksKmxih 00:17:15.245 03:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oWtksKmxih 00:17:15.245 03:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:15.504 [2024-07-13 03:06:21.904332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.504 03:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:15.762 03:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:16.020 [2024-07-13 03:06:22.436504] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:16.020 [2024-07-13 03:06:22.436805] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.020 03:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:16.279 malloc0 00:17:16.280 03:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:16.542 03:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oWtksKmxih 00:17:16.800 
[2024-07-13 03:06:23.168376] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oWtksKmxih 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.oWtksKmxih' 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76136 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76136 /var/tmp/bdevperf.sock 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76136 ']' 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.800 03:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.800 [2024-07-13 03:06:23.279689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:16.800 [2024-07-13 03:06:23.279848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76136 ] 00:17:17.071 [2024-07-13 03:06:23.450011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.343 [2024-07-13 03:06:23.684392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.601 [2024-07-13 03:06:23.850782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:17.861 03:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.861 03:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:17.861 03:06:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oWtksKmxih 00:17:17.861 [2024-07-13 03:06:24.345245] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.861 [2024-07-13 03:06:24.345395] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:18.120 TLSTESTn1 00:17:18.120 03:06:24 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:18.120 Running I/O for 10 seconds... 00:17:28.096 00:17:28.096 Latency(us) 00:17:28.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.096 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:28.096 Verification LBA range: start 0x0 length 0x2000 00:17:28.096 TLSTESTn1 : 10.03 3030.87 11.84 0.00 0.00 42126.76 8579.26 25856.93 00:17:28.096 =================================================================================================================== 00:17:28.096 Total : 3030.87 11.84 0.00 0.00 42126.76 8579.26 25856.93 00:17:28.096 0 00:17:28.357 03:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:28.357 03:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 76136 00:17:28.357 03:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76136 ']' 00:17:28.357 03:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76136 00:17:28.357 03:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:28.357 03:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:28.357 03:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76136 00:17:28.357 killing process with pid 76136 00:17:28.357 Received shutdown signal, test time was about 10.000000 seconds 00:17:28.357 00:17:28.357 Latency(us) 00:17:28.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.358 =================================================================================================================== 00:17:28.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:28.358 03:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:28.358 03:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:17:28.358 03:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76136' 00:17:28.358 03:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76136 00:17:28.358 [2024-07-13 03:06:34.629049] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:28.358 03:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76136 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.oWtksKmxih 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oWtksKmxih 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oWtksKmxih 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oWtksKmxih 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.oWtksKmxih' 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76277 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76277 /var/tmp/bdevperf.sock 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76277 ']' 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.297 03:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.297 [2024-07-13 03:06:35.705531] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:29.297 [2024-07-13 03:06:35.705972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76277 ] 00:17:29.556 [2024-07-13 03:06:35.866647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.556 [2024-07-13 03:06:36.034034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.816 [2024-07-13 03:06:36.205141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oWtksKmxih 00:17:30.385 [2024-07-13 03:06:36.826109] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:30.385 [2024-07-13 03:06:36.826485] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:30.385 [2024-07-13 03:06:36.826630] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.oWtksKmxih 00:17:30.385 request: 00:17:30.385 { 00:17:30.385 "name": "TLSTEST", 00:17:30.385 "trtype": "tcp", 00:17:30.385 "traddr": "10.0.0.2", 00:17:30.385 "adrfam": "ipv4", 00:17:30.385 "trsvcid": "4420", 00:17:30.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.385 "prchk_reftag": false, 00:17:30.385 "prchk_guard": false, 00:17:30.385 "hdgst": false, 00:17:30.385 "ddgst": false, 00:17:30.385 "psk": "/tmp/tmp.oWtksKmxih", 00:17:30.385 "method": "bdev_nvme_attach_controller", 00:17:30.385 "req_id": 1 00:17:30.385 } 00:17:30.385 Got JSON-RPC error response 00:17:30.385 response: 00:17:30.385 { 00:17:30.385 "code": -1, 00:17:30.385 "message": "Operation not permitted" 00:17:30.385 } 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 76277 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76277 ']' 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76277 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76277 00:17:30.385 killing process with pid 76277 00:17:30.385 Received shutdown signal, test time was about 10.000000 seconds 00:17:30.385 00:17:30.385 Latency(us) 00:17:30.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.385 =================================================================================================================== 00:17:30.385 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 76277' 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76277 00:17:30.385 03:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76277 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 76081 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76081 ']' 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76081 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76081 00:17:31.765 killing process with pid 76081 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76081' 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76081 00:17:31.765 [2024-07-13 03:06:37.922710] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:31.765 03:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76081 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76328 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76328 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76328 ']' 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.703 03:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.703 [2024-07-13 03:06:39.165503] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:32.703 [2024-07-13 03:06:39.165876] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.962 [2024-07-13 03:06:39.328304] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.221 [2024-07-13 03:06:39.492550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.221 [2024-07-13 03:06:39.492892] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.221 [2024-07-13 03:06:39.493090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.221 [2024-07-13 03:06:39.493390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.221 [2024-07-13 03:06:39.493433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.221 [2024-07-13 03:06:39.493573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.221 [2024-07-13 03:06:39.655713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.oWtksKmxih 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.oWtksKmxih 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.oWtksKmxih 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oWtksKmxih 00:17:33.790 03:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:34.049 [2024-07-13 03:06:40.299321] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.049 03:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:34.308 03:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:34.308 [2024-07-13 03:06:40.775436] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:17:34.308 [2024-07-13 03:06:40.775696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.308 03:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:34.876 malloc0 00:17:34.876 03:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:34.876 03:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oWtksKmxih 00:17:35.136 [2024-07-13 03:06:41.518797] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:35.136 [2024-07-13 03:06:41.518875] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:35.136 [2024-07-13 03:06:41.519197] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:35.136 request: 00:17:35.136 { 00:17:35.136 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.136 "host": "nqn.2016-06.io.spdk:host1", 00:17:35.136 "psk": "/tmp/tmp.oWtksKmxih", 00:17:35.136 "method": "nvmf_subsystem_add_host", 00:17:35.136 "req_id": 1 00:17:35.136 } 00:17:35.136 Got JSON-RPC error response 00:17:35.136 response: 00:17:35.136 { 00:17:35.136 "code": -32603, 00:17:35.136 "message": "Internal error" 00:17:35.136 } 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 76328 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76328 ']' 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76328 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76328 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:35.136 killing process with pid 76328 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76328' 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76328 00:17:35.136 03:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76328 00:17:36.513 03:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.oWtksKmxih 00:17:36.513 03:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:36.513 03:06:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.513 03:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.513 03:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.513 03:06:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76403 00:17:36.513 
03:06:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:36.513 03:06:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76403 00:17:36.514 03:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76403 ']' 00:17:36.514 03:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.514 03:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.514 03:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.514 03:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.514 03:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.514 [2024-07-13 03:06:42.800968] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:36.514 [2024-07-13 03:06:42.801162] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.514 [2024-07-13 03:06:42.975586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.772 [2024-07-13 03:06:43.144333] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.772 [2024-07-13 03:06:43.144410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.772 [2024-07-13 03:06:43.144425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.772 [2024-07-13 03:06:43.144438] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.772 [2024-07-13 03:06:43.144448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
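The failed nvmf_subsystem_add_host above ("Incorrect permissions for PSK file" -> JSON-RPC error -32603) appears to be the deliberate negative case: the target rejects a PSK file whose mode is too open, and the script then tightens it with chmod 0600 before restarting the target. Condensed, the setup_nvmf_tgt sequence traced here is sketched below; rpc.py stands in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path and talks to the nvmf_tgt running inside the nvmf_tgt_ns_spdk namespace.

KEY=/tmp/tmp.oWtksKmxih                                           # PSK file generated earlier in the run
chmod 0600 "$KEY"                                                 # add_host fails with -32603 until the mode is tightened
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS listener (experimental)
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"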
00:17:36.772 [2024-07-13 03:06:43.144484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.030 [2024-07-13 03:06:43.316781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:37.289 03:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.289 03:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:37.289 03:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.289 03:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.289 03:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.289 03:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.289 03:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.oWtksKmxih 00:17:37.289 03:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oWtksKmxih 00:17:37.289 03:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:37.549 [2024-07-13 03:06:43.910819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.549 03:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:37.807 03:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:38.066 [2024-07-13 03:06:44.350932] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:38.066 [2024-07-13 03:06:44.351252] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.066 03:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:38.326 malloc0 00:17:38.326 03:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:38.585 03:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oWtksKmxih 00:17:38.845 [2024-07-13 03:06:45.132901] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:38.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
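With the target configured (and the PSK-path deprecation warning noted), the host side is exercised through bdevperf: it is started in RPC-wait mode, a TLS controller is attached against the listener above, and the I/O test is driven via bdevperf.py. Condensed from the trace that follows; bdevperf, rpc.py and bdevperf.py stand in for their full spdk_repo paths, and the explicit backgrounding is an assumption (the harness itself uses waitforlisten).

bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &    # -z: idle until configured over RPC
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oWtksKmxih                        # TLS via the shared PSK file
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests                         # run the verify workload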
00:17:38.845 03:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=76452 00:17:38.845 03:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:38.845 03:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.845 03:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 76452 /var/tmp/bdevperf.sock 00:17:38.845 03:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76452 ']' 00:17:38.845 03:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.845 03:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.845 03:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.845 03:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.845 03:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.845 [2024-07-13 03:06:45.237232] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:38.845 [2024-07-13 03:06:45.237716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76452 ] 00:17:39.105 [2024-07-13 03:06:45.399297] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.105 [2024-07-13 03:06:45.571091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.363 [2024-07-13 03:06:45.740923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:39.932 03:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.932 03:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:39.932 03:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oWtksKmxih 00:17:40.191 [2024-07-13 03:06:46.430285] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.191 [2024-07-13 03:06:46.430452] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:40.191 TLSTESTn1 00:17:40.191 03:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:40.451 03:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:40.451 "subsystems": [ 00:17:40.451 { 00:17:40.451 "subsystem": "keyring", 00:17:40.451 "config": [] 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "subsystem": "iobuf", 00:17:40.451 "config": [ 00:17:40.451 { 00:17:40.451 "method": "iobuf_set_options", 00:17:40.451 "params": { 00:17:40.451 "small_pool_count": 8192, 00:17:40.451 "large_pool_count": 1024, 00:17:40.451 "small_bufsize": 8192, 00:17:40.451 "large_bufsize": 135168 00:17:40.451 } 00:17:40.451 } 00:17:40.451 ] 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "subsystem": "sock", 00:17:40.451 "config": [ 00:17:40.451 { 00:17:40.451 
"method": "sock_set_default_impl", 00:17:40.451 "params": { 00:17:40.451 "impl_name": "uring" 00:17:40.451 } 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "method": "sock_impl_set_options", 00:17:40.451 "params": { 00:17:40.451 "impl_name": "ssl", 00:17:40.451 "recv_buf_size": 4096, 00:17:40.451 "send_buf_size": 4096, 00:17:40.451 "enable_recv_pipe": true, 00:17:40.451 "enable_quickack": false, 00:17:40.451 "enable_placement_id": 0, 00:17:40.451 "enable_zerocopy_send_server": true, 00:17:40.451 "enable_zerocopy_send_client": false, 00:17:40.451 "zerocopy_threshold": 0, 00:17:40.451 "tls_version": 0, 00:17:40.451 "enable_ktls": false 00:17:40.451 } 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "method": "sock_impl_set_options", 00:17:40.451 "params": { 00:17:40.451 "impl_name": "posix", 00:17:40.451 "recv_buf_size": 2097152, 00:17:40.451 "send_buf_size": 2097152, 00:17:40.451 "enable_recv_pipe": true, 00:17:40.451 "enable_quickack": false, 00:17:40.451 "enable_placement_id": 0, 00:17:40.451 "enable_zerocopy_send_server": true, 00:17:40.451 "enable_zerocopy_send_client": false, 00:17:40.451 "zerocopy_threshold": 0, 00:17:40.451 "tls_version": 0, 00:17:40.451 "enable_ktls": false 00:17:40.451 } 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "method": "sock_impl_set_options", 00:17:40.451 "params": { 00:17:40.451 "impl_name": "uring", 00:17:40.451 "recv_buf_size": 2097152, 00:17:40.451 "send_buf_size": 2097152, 00:17:40.451 "enable_recv_pipe": true, 00:17:40.451 "enable_quickack": false, 00:17:40.451 "enable_placement_id": 0, 00:17:40.451 "enable_zerocopy_send_server": false, 00:17:40.451 "enable_zerocopy_send_client": false, 00:17:40.451 "zerocopy_threshold": 0, 00:17:40.451 "tls_version": 0, 00:17:40.451 "enable_ktls": false 00:17:40.451 } 00:17:40.451 } 00:17:40.451 ] 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "subsystem": "vmd", 00:17:40.451 "config": [] 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "subsystem": "accel", 00:17:40.451 "config": [ 00:17:40.451 { 00:17:40.451 "method": "accel_set_options", 00:17:40.451 "params": { 00:17:40.451 "small_cache_size": 128, 00:17:40.451 "large_cache_size": 16, 00:17:40.451 "task_count": 2048, 00:17:40.451 "sequence_count": 2048, 00:17:40.451 "buf_count": 2048 00:17:40.451 } 00:17:40.451 } 00:17:40.451 ] 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "subsystem": "bdev", 00:17:40.451 "config": [ 00:17:40.451 { 00:17:40.451 "method": "bdev_set_options", 00:17:40.451 "params": { 00:17:40.451 "bdev_io_pool_size": 65535, 00:17:40.451 "bdev_io_cache_size": 256, 00:17:40.451 "bdev_auto_examine": true, 00:17:40.451 "iobuf_small_cache_size": 128, 00:17:40.451 "iobuf_large_cache_size": 16 00:17:40.451 } 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "method": "bdev_raid_set_options", 00:17:40.451 "params": { 00:17:40.451 "process_window_size_kb": 1024 00:17:40.451 } 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "method": "bdev_iscsi_set_options", 00:17:40.451 "params": { 00:17:40.451 "timeout_sec": 30 00:17:40.451 } 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "method": "bdev_nvme_set_options", 00:17:40.451 "params": { 00:17:40.451 "action_on_timeout": "none", 00:17:40.451 "timeout_us": 0, 00:17:40.451 "timeout_admin_us": 0, 00:17:40.451 "keep_alive_timeout_ms": 10000, 00:17:40.451 "arbitration_burst": 0, 00:17:40.451 "low_priority_weight": 0, 00:17:40.451 "medium_priority_weight": 0, 00:17:40.451 "high_priority_weight": 0, 00:17:40.451 "nvme_adminq_poll_period_us": 10000, 00:17:40.451 "nvme_ioq_poll_period_us": 0, 00:17:40.451 "io_queue_requests": 0, 00:17:40.451 
"delay_cmd_submit": true, 00:17:40.451 "transport_retry_count": 4, 00:17:40.451 "bdev_retry_count": 3, 00:17:40.451 "transport_ack_timeout": 0, 00:17:40.451 "ctrlr_loss_timeout_sec": 0, 00:17:40.451 "reconnect_delay_sec": 0, 00:17:40.451 "fast_io_fail_timeout_sec": 0, 00:17:40.451 "disable_auto_failback": false, 00:17:40.451 "generate_uuids": false, 00:17:40.451 "transport_tos": 0, 00:17:40.451 "nvme_error_stat": false, 00:17:40.451 "rdma_srq_size": 0, 00:17:40.451 "io_path_stat": false, 00:17:40.451 "allow_accel_sequence": false, 00:17:40.451 "rdma_max_cq_size": 0, 00:17:40.451 "rdma_cm_event_timeout_ms": 0, 00:17:40.451 "dhchap_digests": [ 00:17:40.451 "sha256", 00:17:40.451 "sha384", 00:17:40.451 "sha512" 00:17:40.451 ], 00:17:40.451 "dhchap_dhgroups": [ 00:17:40.451 "null", 00:17:40.451 "ffdhe2048", 00:17:40.451 "ffdhe3072", 00:17:40.451 "ffdhe4096", 00:17:40.451 "ffdhe6144", 00:17:40.451 "ffdhe8192" 00:17:40.451 ] 00:17:40.451 } 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "method": "bdev_nvme_set_hotplug", 00:17:40.451 "params": { 00:17:40.451 "period_us": 100000, 00:17:40.451 "enable": false 00:17:40.451 } 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "method": "bdev_malloc_create", 00:17:40.451 "params": { 00:17:40.451 "name": "malloc0", 00:17:40.451 "num_blocks": 8192, 00:17:40.451 "block_size": 4096, 00:17:40.451 "physical_block_size": 4096, 00:17:40.451 "uuid": "2fddc0ee-e223-4e93-aa02-920f12d45189", 00:17:40.451 "optimal_io_boundary": 0 00:17:40.451 } 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "method": "bdev_wait_for_examine" 00:17:40.451 } 00:17:40.451 ] 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "subsystem": "nbd", 00:17:40.451 "config": [] 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "subsystem": "scheduler", 00:17:40.451 "config": [ 00:17:40.451 { 00:17:40.451 "method": "framework_set_scheduler", 00:17:40.451 "params": { 00:17:40.451 "name": "static" 00:17:40.451 } 00:17:40.451 } 00:17:40.451 ] 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "subsystem": "nvmf", 00:17:40.451 "config": [ 00:17:40.451 { 00:17:40.451 "method": "nvmf_set_config", 00:17:40.451 "params": { 00:17:40.451 "discovery_filter": "match_any", 00:17:40.451 "admin_cmd_passthru": { 00:17:40.451 "identify_ctrlr": false 00:17:40.451 } 00:17:40.451 } 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "method": "nvmf_set_max_subsystems", 00:17:40.451 "params": { 00:17:40.451 "max_subsystems": 1024 00:17:40.451 } 00:17:40.451 }, 00:17:40.451 { 00:17:40.451 "method": "nvmf_set_crdt", 00:17:40.451 "params": { 00:17:40.452 "crdt1": 0, 00:17:40.452 "crdt2": 0, 00:17:40.452 "crdt3": 0 00:17:40.452 } 00:17:40.452 }, 00:17:40.452 { 00:17:40.452 "method": "nvmf_create_transport", 00:17:40.452 "params": { 00:17:40.452 "trtype": "TCP", 00:17:40.452 "max_queue_depth": 128, 00:17:40.452 "max_io_qpairs_per_ctrlr": 127, 00:17:40.452 "in_capsule_data_size": 4096, 00:17:40.452 "max_io_size": 131072, 00:17:40.452 "io_unit_size": 131072, 00:17:40.452 "max_aq_depth": 128, 00:17:40.452 "num_shared_buffers": 511, 00:17:40.452 "buf_cache_size": 4294967295, 00:17:40.452 "dif_insert_or_strip": false, 00:17:40.452 "zcopy": false, 00:17:40.452 "c2h_success": false, 00:17:40.452 "sock_priority": 0, 00:17:40.452 "abort_timeout_sec": 1, 00:17:40.452 "ack_timeout": 0, 00:17:40.452 "data_wr_pool_size": 0 00:17:40.452 } 00:17:40.452 }, 00:17:40.452 { 00:17:40.452 "method": "nvmf_create_subsystem", 00:17:40.452 "params": { 00:17:40.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.452 "allow_any_host": false, 00:17:40.452 "serial_number": 
"SPDK00000000000001", 00:17:40.452 "model_number": "SPDK bdev Controller", 00:17:40.452 "max_namespaces": 10, 00:17:40.452 "min_cntlid": 1, 00:17:40.452 "max_cntlid": 65519, 00:17:40.452 "ana_reporting": false 00:17:40.452 } 00:17:40.452 }, 00:17:40.452 { 00:17:40.452 "method": "nvmf_subsystem_add_host", 00:17:40.452 "params": { 00:17:40.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.452 "host": "nqn.2016-06.io.spdk:host1", 00:17:40.452 "psk": "/tmp/tmp.oWtksKmxih" 00:17:40.452 } 00:17:40.452 }, 00:17:40.452 { 00:17:40.452 "method": "nvmf_subsystem_add_ns", 00:17:40.452 "params": { 00:17:40.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.452 "namespace": { 00:17:40.452 "nsid": 1, 00:17:40.452 "bdev_name": "malloc0", 00:17:40.452 "nguid": "2FDDC0EEE2234E93AA02920F12D45189", 00:17:40.452 "uuid": "2fddc0ee-e223-4e93-aa02-920f12d45189", 00:17:40.452 "no_auto_visible": false 00:17:40.452 } 00:17:40.452 } 00:17:40.452 }, 00:17:40.452 { 00:17:40.452 "method": "nvmf_subsystem_add_listener", 00:17:40.452 "params": { 00:17:40.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.452 "listen_address": { 00:17:40.452 "trtype": "TCP", 00:17:40.452 "adrfam": "IPv4", 00:17:40.452 "traddr": "10.0.0.2", 00:17:40.452 "trsvcid": "4420" 00:17:40.452 }, 00:17:40.452 "secure_channel": true 00:17:40.452 } 00:17:40.452 } 00:17:40.452 ] 00:17:40.452 } 00:17:40.452 ] 00:17:40.452 }' 00:17:40.452 03:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:41.020 03:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:41.020 "subsystems": [ 00:17:41.020 { 00:17:41.020 "subsystem": "keyring", 00:17:41.020 "config": [] 00:17:41.020 }, 00:17:41.020 { 00:17:41.020 "subsystem": "iobuf", 00:17:41.020 "config": [ 00:17:41.020 { 00:17:41.020 "method": "iobuf_set_options", 00:17:41.020 "params": { 00:17:41.020 "small_pool_count": 8192, 00:17:41.020 "large_pool_count": 1024, 00:17:41.020 "small_bufsize": 8192, 00:17:41.020 "large_bufsize": 135168 00:17:41.020 } 00:17:41.020 } 00:17:41.020 ] 00:17:41.020 }, 00:17:41.020 { 00:17:41.020 "subsystem": "sock", 00:17:41.020 "config": [ 00:17:41.020 { 00:17:41.020 "method": "sock_set_default_impl", 00:17:41.020 "params": { 00:17:41.020 "impl_name": "uring" 00:17:41.020 } 00:17:41.020 }, 00:17:41.020 { 00:17:41.020 "method": "sock_impl_set_options", 00:17:41.020 "params": { 00:17:41.020 "impl_name": "ssl", 00:17:41.020 "recv_buf_size": 4096, 00:17:41.020 "send_buf_size": 4096, 00:17:41.020 "enable_recv_pipe": true, 00:17:41.020 "enable_quickack": false, 00:17:41.020 "enable_placement_id": 0, 00:17:41.020 "enable_zerocopy_send_server": true, 00:17:41.020 "enable_zerocopy_send_client": false, 00:17:41.020 "zerocopy_threshold": 0, 00:17:41.020 "tls_version": 0, 00:17:41.020 "enable_ktls": false 00:17:41.020 } 00:17:41.020 }, 00:17:41.020 { 00:17:41.020 "method": "sock_impl_set_options", 00:17:41.020 "params": { 00:17:41.020 "impl_name": "posix", 00:17:41.020 "recv_buf_size": 2097152, 00:17:41.020 "send_buf_size": 2097152, 00:17:41.020 "enable_recv_pipe": true, 00:17:41.020 "enable_quickack": false, 00:17:41.020 "enable_placement_id": 0, 00:17:41.020 "enable_zerocopy_send_server": true, 00:17:41.020 "enable_zerocopy_send_client": false, 00:17:41.020 "zerocopy_threshold": 0, 00:17:41.020 "tls_version": 0, 00:17:41.020 "enable_ktls": false 00:17:41.020 } 00:17:41.020 }, 00:17:41.020 { 00:17:41.020 "method": "sock_impl_set_options", 00:17:41.020 "params": { 00:17:41.020 "impl_name": "uring", 
00:17:41.020 "recv_buf_size": 2097152, 00:17:41.020 "send_buf_size": 2097152, 00:17:41.020 "enable_recv_pipe": true, 00:17:41.020 "enable_quickack": false, 00:17:41.021 "enable_placement_id": 0, 00:17:41.021 "enable_zerocopy_send_server": false, 00:17:41.021 "enable_zerocopy_send_client": false, 00:17:41.021 "zerocopy_threshold": 0, 00:17:41.021 "tls_version": 0, 00:17:41.021 "enable_ktls": false 00:17:41.021 } 00:17:41.021 } 00:17:41.021 ] 00:17:41.021 }, 00:17:41.021 { 00:17:41.021 "subsystem": "vmd", 00:17:41.021 "config": [] 00:17:41.021 }, 00:17:41.021 { 00:17:41.021 "subsystem": "accel", 00:17:41.021 "config": [ 00:17:41.021 { 00:17:41.021 "method": "accel_set_options", 00:17:41.021 "params": { 00:17:41.021 "small_cache_size": 128, 00:17:41.021 "large_cache_size": 16, 00:17:41.021 "task_count": 2048, 00:17:41.021 "sequence_count": 2048, 00:17:41.021 "buf_count": 2048 00:17:41.021 } 00:17:41.021 } 00:17:41.021 ] 00:17:41.021 }, 00:17:41.021 { 00:17:41.021 "subsystem": "bdev", 00:17:41.021 "config": [ 00:17:41.021 { 00:17:41.021 "method": "bdev_set_options", 00:17:41.021 "params": { 00:17:41.021 "bdev_io_pool_size": 65535, 00:17:41.021 "bdev_io_cache_size": 256, 00:17:41.021 "bdev_auto_examine": true, 00:17:41.021 "iobuf_small_cache_size": 128, 00:17:41.021 "iobuf_large_cache_size": 16 00:17:41.021 } 00:17:41.021 }, 00:17:41.021 { 00:17:41.021 "method": "bdev_raid_set_options", 00:17:41.021 "params": { 00:17:41.021 "process_window_size_kb": 1024 00:17:41.021 } 00:17:41.021 }, 00:17:41.021 { 00:17:41.021 "method": "bdev_iscsi_set_options", 00:17:41.021 "params": { 00:17:41.021 "timeout_sec": 30 00:17:41.021 } 00:17:41.021 }, 00:17:41.021 { 00:17:41.021 "method": "bdev_nvme_set_options", 00:17:41.021 "params": { 00:17:41.021 "action_on_timeout": "none", 00:17:41.021 "timeout_us": 0, 00:17:41.021 "timeout_admin_us": 0, 00:17:41.021 "keep_alive_timeout_ms": 10000, 00:17:41.021 "arbitration_burst": 0, 00:17:41.021 "low_priority_weight": 0, 00:17:41.021 "medium_priority_weight": 0, 00:17:41.021 "high_priority_weight": 0, 00:17:41.021 "nvme_adminq_poll_period_us": 10000, 00:17:41.021 "nvme_ioq_poll_period_us": 0, 00:17:41.021 "io_queue_requests": 512, 00:17:41.021 "delay_cmd_submit": true, 00:17:41.021 "transport_retry_count": 4, 00:17:41.021 "bdev_retry_count": 3, 00:17:41.021 "transport_ack_timeout": 0, 00:17:41.021 "ctrlr_loss_timeout_sec": 0, 00:17:41.021 "reconnect_delay_sec": 0, 00:17:41.021 "fast_io_fail_timeout_sec": 0, 00:17:41.021 "disable_auto_failback": false, 00:17:41.021 "generate_uuids": false, 00:17:41.021 "transport_tos": 0, 00:17:41.021 "nvme_error_stat": false, 00:17:41.021 "rdma_srq_size": 0, 00:17:41.021 "io_path_stat": false, 00:17:41.021 "allow_accel_sequence": false, 00:17:41.021 "rdma_max_cq_size": 0, 00:17:41.021 "rdma_cm_event_timeout_ms": 0, 00:17:41.021 "dhchap_digests": [ 00:17:41.021 "sha256", 00:17:41.021 "sha384", 00:17:41.021 "sha512" 00:17:41.021 ], 00:17:41.021 "dhchap_dhgroups": [ 00:17:41.021 "null", 00:17:41.021 "ffdhe2048", 00:17:41.021 "ffdhe3072", 00:17:41.021 "ffdhe4096", 00:17:41.021 "ffdhe6144", 00:17:41.021 "ffdhe8192" 00:17:41.021 ] 00:17:41.021 } 00:17:41.021 }, 00:17:41.021 { 00:17:41.021 "method": "bdev_nvme_attach_controller", 00:17:41.021 "params": { 00:17:41.021 "name": "TLSTEST", 00:17:41.021 "trtype": "TCP", 00:17:41.021 "adrfam": "IPv4", 00:17:41.021 "traddr": "10.0.0.2", 00:17:41.021 "trsvcid": "4420", 00:17:41.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.021 "prchk_reftag": false, 00:17:41.021 "prchk_guard": false, 00:17:41.021 
"ctrlr_loss_timeout_sec": 0, 00:17:41.021 "reconnect_delay_sec": 0, 00:17:41.021 "fast_io_fail_timeout_sec": 0, 00:17:41.021 "psk": "/tmp/tmp.oWtksKmxih", 00:17:41.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.021 "hdgst": false, 00:17:41.021 "ddgst": false 00:17:41.021 } 00:17:41.021 }, 00:17:41.021 { 00:17:41.021 "method": "bdev_nvme_set_hotplug", 00:17:41.021 "params": { 00:17:41.021 "period_us": 100000, 00:17:41.021 "enable": false 00:17:41.021 } 00:17:41.021 }, 00:17:41.021 { 00:17:41.021 "method": "bdev_wait_for_examine" 00:17:41.021 } 00:17:41.021 ] 00:17:41.021 }, 00:17:41.021 { 00:17:41.021 "subsystem": "nbd", 00:17:41.021 "config": [] 00:17:41.021 } 00:17:41.021 ] 00:17:41.021 }' 00:17:41.021 03:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 76452 00:17:41.021 03:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76452 ']' 00:17:41.021 03:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76452 00:17:41.021 03:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:41.021 03:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:41.021 03:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76452 00:17:41.021 killing process with pid 76452 00:17:41.021 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.021 00:17:41.021 Latency(us) 00:17:41.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.021 =================================================================================================================== 00:17:41.021 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.021 03:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:41.021 03:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:41.021 03:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76452' 00:17:41.021 03:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76452 00:17:41.021 [2024-07-13 03:06:47.261743] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:41.021 03:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76452 00:17:41.957 03:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 76403 00:17:41.957 03:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76403 ']' 00:17:41.957 03:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76403 00:17:41.957 03:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:41.957 03:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:41.957 03:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76403 00:17:41.957 killing process with pid 76403 00:17:41.957 03:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:41.957 03:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:41.957 03:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76403' 00:17:41.957 03:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76403 00:17:41.957 [2024-07-13 03:06:48.235404] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled 
for removal in v24.09 hit 1 times 00:17:41.957 03:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76403 00:17:42.893 03:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:42.893 03:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:42.893 "subsystems": [ 00:17:42.893 { 00:17:42.893 "subsystem": "keyring", 00:17:42.893 "config": [] 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "subsystem": "iobuf", 00:17:42.893 "config": [ 00:17:42.893 { 00:17:42.893 "method": "iobuf_set_options", 00:17:42.893 "params": { 00:17:42.893 "small_pool_count": 8192, 00:17:42.893 "large_pool_count": 1024, 00:17:42.893 "small_bufsize": 8192, 00:17:42.893 "large_bufsize": 135168 00:17:42.893 } 00:17:42.893 } 00:17:42.893 ] 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "subsystem": "sock", 00:17:42.893 "config": [ 00:17:42.893 { 00:17:42.893 "method": "sock_set_default_impl", 00:17:42.893 "params": { 00:17:42.893 "impl_name": "uring" 00:17:42.893 } 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "method": "sock_impl_set_options", 00:17:42.893 "params": { 00:17:42.893 "impl_name": "ssl", 00:17:42.893 "recv_buf_size": 4096, 00:17:42.893 "send_buf_size": 4096, 00:17:42.893 "enable_recv_pipe": true, 00:17:42.893 "enable_quickack": false, 00:17:42.893 "enable_placement_id": 0, 00:17:42.893 "enable_zerocopy_send_server": true, 00:17:42.893 "enable_zerocopy_send_client": false, 00:17:42.893 "zerocopy_threshold": 0, 00:17:42.893 "tls_version": 0, 00:17:42.893 "enable_ktls": false 00:17:42.893 } 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "method": "sock_impl_set_options", 00:17:42.893 "params": { 00:17:42.893 "impl_name": "posix", 00:17:42.893 "recv_buf_size": 2097152, 00:17:42.893 "send_buf_size": 2097152, 00:17:42.893 "enable_recv_pipe": true, 00:17:42.893 "enable_quickack": false, 00:17:42.893 "enable_placement_id": 0, 00:17:42.893 "enable_zerocopy_send_server": true, 00:17:42.893 "enable_zerocopy_send_client": false, 00:17:42.893 "zerocopy_threshold": 0, 00:17:42.893 "tls_version": 0, 00:17:42.893 "enable_ktls": false 00:17:42.893 } 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "method": "sock_impl_set_options", 00:17:42.893 "params": { 00:17:42.893 "impl_name": "uring", 00:17:42.893 "recv_buf_size": 2097152, 00:17:42.893 "send_buf_size": 2097152, 00:17:42.893 "enable_recv_pipe": true, 00:17:42.893 "enable_quickack": false, 00:17:42.893 "enable_placement_id": 0, 00:17:42.893 "enable_zerocopy_send_server": false, 00:17:42.893 "enable_zerocopy_send_client": false, 00:17:42.893 "zerocopy_threshold": 0, 00:17:42.893 "tls_version": 0, 00:17:42.893 "enable_ktls": false 00:17:42.893 } 00:17:42.893 } 00:17:42.893 ] 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "subsystem": "vmd", 00:17:42.893 "config": [] 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "subsystem": "accel", 00:17:42.893 "config": [ 00:17:42.893 { 00:17:42.893 "method": "accel_set_options", 00:17:42.893 "params": { 00:17:42.893 "small_cache_size": 128, 00:17:42.893 "large_cache_size": 16, 00:17:42.893 "task_count": 2048, 00:17:42.893 "sequence_count": 2048, 00:17:42.893 "buf_count": 2048 00:17:42.893 } 00:17:42.893 } 00:17:42.893 ] 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "subsystem": "bdev", 00:17:42.893 "config": [ 00:17:42.893 { 00:17:42.893 "method": "bdev_set_options", 00:17:42.893 "params": { 00:17:42.893 "bdev_io_pool_size": 65535, 00:17:42.893 "bdev_io_cache_size": 256, 00:17:42.893 "bdev_auto_examine": true, 00:17:42.893 "iobuf_small_cache_size": 128, 00:17:42.893 "iobuf_large_cache_size": 16 
00:17:42.893 } 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "method": "bdev_raid_set_options", 00:17:42.893 "params": { 00:17:42.893 "process_window_size_kb": 1024 00:17:42.893 } 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "method": "bdev_iscsi_set_options", 00:17:42.893 "params": { 00:17:42.893 "timeout_sec": 30 00:17:42.893 } 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "method": "bdev_nvme_set_options", 00:17:42.893 "params": { 00:17:42.893 "action_on_timeout": "none", 00:17:42.893 "timeout_us": 0, 00:17:42.893 "timeout_admin_us": 0, 00:17:42.893 "keep_alive_timeout_ms": 10000, 00:17:42.893 "arbitration_burst": 0, 00:17:42.893 "low_priority_weight": 0, 00:17:42.893 "medium_priority_weight": 0, 00:17:42.893 "high_priority_weight": 0, 00:17:42.893 "nvme_adminq_poll_period_us": 10000, 00:17:42.893 "nvme_ioq_poll_period_us": 0, 00:17:42.893 "io_queue_requests": 0, 00:17:42.893 "delay_cmd_submit": true, 00:17:42.893 "transport_retry_count": 4, 00:17:42.893 "bdev_retry_count": 3, 00:17:42.893 "transport_ack_timeout": 0, 00:17:42.893 "ctrlr_loss_timeout_sec": 0, 00:17:42.893 "reconnect_delay_sec": 0, 00:17:42.893 "fast_io_fail_timeout_sec": 0, 00:17:42.893 "disable_auto_failback": false, 00:17:42.893 "generate_uuids": false, 00:17:42.893 "transport_tos": 0, 00:17:42.893 "nvme_error_stat": false, 00:17:42.893 "rdma_srq_size": 0, 00:17:42.893 "io_path_stat": false, 00:17:42.893 "allow_accel_sequence": false, 00:17:42.893 "rdma_max_cq_size": 0, 00:17:42.893 "rdma_cm_event_timeout_ms": 0, 00:17:42.893 "dhchap_digests": [ 00:17:42.893 "sha256", 00:17:42.893 "sha384", 00:17:42.893 "sha512" 00:17:42.893 ], 00:17:42.893 "dhchap_dhgroups": [ 00:17:42.893 "null", 00:17:42.893 "ffdhe2048", 00:17:42.893 "ffdhe3072", 00:17:42.893 "ffdhe4096", 00:17:42.893 "ffdhe6144", 00:17:42.893 "ffdhe8192" 00:17:42.893 ] 00:17:42.893 } 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "method": "bdev_nvme_set_hotplug", 00:17:42.893 "params": { 00:17:42.893 "period_us": 100000, 00:17:42.893 "enable": false 00:17:42.893 } 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "method": "bdev_malloc_create", 00:17:42.893 "params": { 00:17:42.893 "name": "malloc0", 00:17:42.893 "num_blocks": 8192, 00:17:42.893 "block_size": 4096, 00:17:42.893 "physical_block_size": 4096, 00:17:42.893 "uuid": "2fddc0ee-e223-4e93-aa02-920f12d45189", 00:17:42.893 "optimal_io_boundary": 0 00:17:42.893 } 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "method": "bdev_wait_for_examine" 00:17:42.893 } 00:17:42.893 ] 00:17:42.893 }, 00:17:42.893 { 00:17:42.893 "subsystem": "nbd", 00:17:42.894 "config": [] 00:17:42.894 }, 00:17:42.894 { 00:17:42.894 "subsystem": "scheduler", 00:17:42.894 "config": [ 00:17:42.894 { 00:17:42.894 "method": "framework_set_scheduler", 00:17:42.894 "params": { 00:17:42.894 "name": "static" 00:17:42.894 } 00:17:42.894 } 00:17:42.894 ] 00:17:42.894 }, 00:17:42.894 { 00:17:42.894 "subsystem": "nvmf", 00:17:42.894 "config": [ 00:17:42.894 { 00:17:42.894 "method": "nvmf_set_config", 00:17:42.894 "params": { 00:17:42.894 "discovery_filter": "match_any", 00:17:42.894 "admin_cmd_passthru": { 00:17:42.894 "identify_ctrlr": false 00:17:42.894 } 00:17:42.894 } 00:17:42.894 }, 00:17:42.894 { 00:17:42.894 "method": "nvmf_set_max_subsystems", 00:17:42.894 "params": { 00:17:42.894 "max_subsystems": 1024 00:17:42.894 } 00:17:42.894 }, 00:17:42.894 { 00:17:42.894 "method": "nvmf_set_crdt", 00:17:42.894 "params": { 00:17:42.894 "crdt1": 0, 00:17:42.894 "crdt2": 0, 00:17:42.894 "crdt3": 0 00:17:42.894 } 00:17:42.894 }, 00:17:42.894 { 00:17:42.894 "method": 
"nvmf_create_transport", 00:17:42.894 "params": { 00:17:42.894 "trtype": "TCP", 00:17:42.894 "max_queue_depth": 128, 00:17:42.894 "max_io_qpairs_per_ctrlr": 127, 00:17:42.894 "in_capsule_data_size": 4096, 00:17:42.894 "max_io_size": 131072, 00:17:42.894 "io_unit_size": 131072, 00:17:42.894 "max_aq_depth": 128, 00:17:42.894 "num_shared_buffers": 511, 00:17:42.894 "buf_cache_size": 4294967295, 00:17:42.894 "dif_insert_or_strip": false, 00:17:42.894 "zcopy": false, 00:17:42.894 "c2h_success": false, 00:17:42.894 "sock_priority": 0, 00:17:42.894 "abort_timeout_sec": 1, 00:17:42.894 "ack_timeout": 0, 00:17:42.894 "data_wr_pool_size": 0 00:17:42.894 } 00:17:42.894 }, 00:17:42.894 { 00:17:42.894 "method": "nvmf_create_subsystem", 00:17:42.894 "params": { 00:17:42.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.894 "allow_any_host": false, 00:17:42.894 "serial_number": "SPDK00000000000001", 00:17:42.894 "model_number": "SPDK bdev Controller", 00:17:42.894 "max_namespaces": 10, 00:17:42.894 "min_cntlid": 1, 00:17:42.894 "max_cntlid": 65519, 00:17:42.894 "ana_reporting": false 00:17:42.894 } 00:17:42.894 }, 00:17:42.894 { 00:17:42.894 "method": "nvmf_subsystem_add_host", 00:17:42.894 "params": { 00:17:42.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.894 "host": "nqn.2016-06.io.spdk:host1", 00:17:42.894 "psk": "/tmp/tmp.oWtksKmxih" 00:17:42.894 } 00:17:42.894 }, 00:17:42.894 { 00:17:42.894 "method": "nvmf_subsystem_add_ns", 00:17:42.894 "params": { 00:17:42.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.894 "namespace": { 00:17:42.894 "nsid": 1, 00:17:42.894 "bdev_name": "malloc0", 00:17:42.894 "nguid": "2FDDC0EEE2234E93AA02920F12D45189", 00:17:42.894 "uuid": "2fddc0ee-e223-4e93-aa02-920f12d45189", 00:17:42.894 "no_auto_visible": false 00:17:42.894 } 00:17:42.894 } 00:17:42.894 }, 00:17:42.894 { 00:17:42.894 "method": "nvmf_subsystem_add_listener", 00:17:42.894 "params": { 00:17:42.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.894 "listen_address": { 00:17:42.894 "trtype": "TCP", 00:17:42.894 "adrfam": "IPv4", 00:17:42.894 "traddr": "10.0.0.2", 00:17:42.894 "trsvcid": "4420" 00:17:42.894 }, 00:17:42.894 "secure_channel": true 00:17:42.894 } 00:17:42.894 } 00:17:42.894 ] 00:17:42.894 } 00:17:42.894 ] 00:17:42.894 }' 00:17:42.894 03:06:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.894 03:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:42.894 03:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.894 03:06:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76524 00:17:42.894 03:06:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76524 00:17:42.894 03:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76524 ']' 00:17:42.894 03:06:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:42.894 03:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.894 03:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.894 03:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:42.894 03:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.894 03:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.152 [2024-07-13 03:06:49.465163] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:43.152 [2024-07-13 03:06:49.465345] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.152 [2024-07-13 03:06:49.637949] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.410 [2024-07-13 03:06:49.808466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.410 [2024-07-13 03:06:49.808558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.410 [2024-07-13 03:06:49.808575] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.410 [2024-07-13 03:06:49.808588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.410 [2024-07-13 03:06:49.808599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.410 [2024-07-13 03:06:49.808750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.669 [2024-07-13 03:06:50.092002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:43.928 [2024-07-13 03:06:50.234137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.928 [2024-07-13 03:06:50.250077] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:43.928 [2024-07-13 03:06:50.266060] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.928 [2024-07-13 03:06:50.274078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.928 03:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.928 03:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:43.928 03:06:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.928 03:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:43.928 03:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.928 03:06:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.928 03:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=76552 00:17:43.928 03:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 76552 /var/tmp/bdevperf.sock 00:17:43.928 03:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:43.928 03:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76552 ']' 00:17:43.928 03:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.928 03:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:43.928 "subsystems": [ 00:17:43.928 { 00:17:43.928 "subsystem": "keyring", 00:17:43.928 "config": [] 00:17:43.928 }, 00:17:43.928 { 00:17:43.928 
"subsystem": "iobuf", 00:17:43.928 "config": [ 00:17:43.928 { 00:17:43.928 "method": "iobuf_set_options", 00:17:43.928 "params": { 00:17:43.928 "small_pool_count": 8192, 00:17:43.928 "large_pool_count": 1024, 00:17:43.928 "small_bufsize": 8192, 00:17:43.928 "large_bufsize": 135168 00:17:43.928 } 00:17:43.928 } 00:17:43.928 ] 00:17:43.928 }, 00:17:43.928 { 00:17:43.928 "subsystem": "sock", 00:17:43.928 "config": [ 00:17:43.928 { 00:17:43.928 "method": "sock_set_default_impl", 00:17:43.928 "params": { 00:17:43.928 "impl_name": "uring" 00:17:43.928 } 00:17:43.928 }, 00:17:43.928 { 00:17:43.928 "method": "sock_impl_set_options", 00:17:43.928 "params": { 00:17:43.928 "impl_name": "ssl", 00:17:43.928 "recv_buf_size": 4096, 00:17:43.928 "send_buf_size": 4096, 00:17:43.928 "enable_recv_pipe": true, 00:17:43.928 "enable_quickack": false, 00:17:43.928 "enable_placement_id": 0, 00:17:43.928 "enable_zerocopy_send_server": true, 00:17:43.928 "enable_zerocopy_send_client": false, 00:17:43.928 "zerocopy_threshold": 0, 00:17:43.928 "tls_version": 0, 00:17:43.928 "enable_ktls": false 00:17:43.928 } 00:17:43.928 }, 00:17:43.928 { 00:17:43.928 "method": "sock_impl_set_options", 00:17:43.928 "params": { 00:17:43.928 "impl_name": "posix", 00:17:43.928 "recv_buf_size": 2097152, 00:17:43.928 "send_buf_size": 2097152, 00:17:43.928 "enable_recv_pipe": true, 00:17:43.928 "enable_quickack": false, 00:17:43.928 "enable_placement_id": 0, 00:17:43.928 "enable_zerocopy_send_server": true, 00:17:43.928 "enable_zerocopy_send_client": false, 00:17:43.928 "zerocopy_threshold": 0, 00:17:43.928 "tls_version": 0, 00:17:43.928 "enable_ktls": false 00:17:43.928 } 00:17:43.928 }, 00:17:43.928 { 00:17:43.928 "method": "sock_impl_set_options", 00:17:43.928 "params": { 00:17:43.928 "impl_name": "uring", 00:17:43.928 "recv_buf_size": 2097152, 00:17:43.928 "send_buf_size": 2097152, 00:17:43.928 "enable_recv_pipe": true, 00:17:43.928 "enable_quickack": false, 00:17:43.928 "enable_placement_id": 0, 00:17:43.928 "enable_zerocopy_send_server": false, 00:17:43.928 "enable_zerocopy_send_client": false, 00:17:43.928 "zerocopy_threshold": 0, 00:17:43.928 "tls_version": 0, 00:17:43.928 "enable_ktls": false 00:17:43.928 } 00:17:43.928 } 00:17:43.928 ] 00:17:43.928 }, 00:17:43.928 { 00:17:43.928 "subsystem": "vmd", 00:17:43.928 "config": [] 00:17:43.928 }, 00:17:43.928 { 00:17:43.928 "subsystem": "accel", 00:17:43.928 "config": [ 00:17:43.928 { 00:17:43.928 "method": "accel_set_options", 00:17:43.928 "params": { 00:17:43.928 "small_cache_size": 128, 00:17:43.928 "large_cache_size": 16, 00:17:43.928 "task_count": 2048, 00:17:43.928 "sequence_count": 2048, 00:17:43.928 "buf_count": 2048 00:17:43.928 } 00:17:43.928 } 00:17:43.928 ] 00:17:43.928 }, 00:17:43.928 { 00:17:43.928 "subsystem": "bdev", 00:17:43.928 "config": [ 00:17:43.928 { 00:17:43.928 "method": "bdev_set_options", 00:17:43.928 "params": { 00:17:43.928 "bdev_io_pool_size": 65535, 00:17:43.928 "bdev_io_cache_size": 256, 00:17:43.928 "bdev_auto_examine": true, 00:17:43.928 "iobuf_small_cache_size": 128, 00:17:43.928 "iobuf_large_cache_size": 16 00:17:43.928 } 00:17:43.928 }, 00:17:43.928 { 00:17:43.928 "method": "bdev_raid_set_options", 00:17:43.928 "params": { 00:17:43.928 "process_window_size_kb": 1024 00:17:43.928 } 00:17:43.928 }, 00:17:43.928 { 00:17:43.928 "method": "bdev_iscsi_set_options", 00:17:43.928 "params": { 00:17:43.928 "timeout_sec": 30 00:17:43.928 } 00:17:43.928 }, 00:17:43.928 { 00:17:43.929 "method": "bdev_nvme_set_options", 00:17:43.929 "params": { 00:17:43.929 
"action_on_timeout": "none", 00:17:43.929 "timeout_us": 0, 00:17:43.929 "timeout_admin_us": 0, 00:17:43.929 "keep_alive_timeout_ms": 10000, 00:17:43.929 "arbitration_burst": 0, 00:17:43.929 "low_priority_weight": 0, 00:17:43.929 "medium_priority_weight": 0, 00:17:43.929 "high_priority_weight": 0, 00:17:43.929 "nvme_adminq_poll_period_us": 10000, 00:17:43.929 "nvme_ioq_poll_period_us": 0, 00:17:43.929 "io_queue_requests": 512, 00:17:43.929 "delay_cmd_submit": true, 00:17:43.929 "transport_retry_count": 4, 00:17:43.929 "bdev_retry_count": 3, 00:17:43.929 "transport_ack_timeout": 0, 00:17:43.929 "ctrlr_loss_timeout_sec": 0, 00:17:43.929 "reconnect_delay_sec": 0, 00:17:43.929 "fast_io_fail_timeout_sec": 0, 00:17:43.929 "disable_auto_failback": false, 00:17:43.929 "generate_uuids": false, 00:17:43.929 03:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.929 "transport_tos": 0, 00:17:43.929 "nvme_error_stat": false, 00:17:43.929 "rdma_srq_size": 0, 00:17:43.929 "io_path_stat": false, 00:17:43.929 "allow_accel_sequence": false, 00:17:43.929 "rdma_max_cq_size": 0, 00:17:43.929 "rdma_cm_event_timeout_ms": 0, 00:17:43.929 "dhchap_digests": [ 00:17:43.929 "sha256", 00:17:43.929 "sha384", 00:17:43.929 "sha512" 00:17:43.929 ], 00:17:43.929 "dhchap_dhgroups": [ 00:17:43.929 "null", 00:17:43.929 "ffdhe2048", 00:17:43.929 "ffdhe3072", 00:17:43.929 "ffdhe4096", 00:17:43.929 "ffdhe6144", 00:17:43.929 "ffdhe8192" 00:17:43.929 ] 00:17:43.929 } 00:17:43.929 }, 00:17:43.929 { 00:17:43.929 "method": "bdev_nvme_attach_controller", 00:17:43.929 "params": { 00:17:43.929 "name": "TLSTEST", 00:17:43.929 "trtype": "TCP", 00:17:43.929 "adrfam": "IPv4", 00:17:43.929 "traddr": "10.0.0.2", 00:17:43.929 "trsvcid": "4420", 00:17:43.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.929 "prchk_reftag": false, 00:17:43.929 "prchk_guard": false, 00:17:43.929 "ctrlr_loss_timeout_sec": 0, 00:17:43.929 "reconnect_delay_sec": 0, 00:17:43.929 "fast_io_fail_timeout_sec": 0, 00:17:43.929 "psk": "/tmp/tmp.oWtksKmxih", 00:17:43.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.929 "hdgst": false, 00:17:43.929 "ddgst": false 00:17:43.929 } 00:17:43.929 }, 00:17:43.929 { 00:17:43.929 "method": "bdev_nvme_set_hotplug", 00:17:43.929 "params": { 00:17:43.929 "period_us": 100000, 00:17:43.929 "enable": false 00:17:43.929 } 00:17:43.929 }, 00:17:43.929 { 00:17:43.929 "method": "bdev_wait_for_examine" 00:17:43.929 } 00:17:43.929 ] 00:17:43.929 }, 00:17:43.929 { 00:17:43.929 "subsystem": "nbd", 00:17:43.929 "config": [] 00:17:43.929 } 00:17:43.929 ] 00:17:43.929 }' 00:17:43.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.929 03:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.929 03:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.929 03:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.188 [2024-07-13 03:06:50.504491] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:44.188 [2024-07-13 03:06:50.504674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76552 ] 00:17:44.188 [2024-07-13 03:06:50.678338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.446 [2024-07-13 03:06:50.902405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.705 [2024-07-13 03:06:51.151800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:44.965 [2024-07-13 03:06:51.247874] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:44.965 [2024-07-13 03:06:51.248080] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:44.965 03:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.965 03:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:44.965 03:06:51 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:45.224 Running I/O for 10 seconds... 00:17:55.202 00:17:55.202 Latency(us) 00:17:55.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.202 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:55.202 Verification LBA range: start 0x0 length 0x2000 00:17:55.202 TLSTESTn1 : 10.02 3117.26 12.18 0.00 0.00 40973.02 8102.63 49807.36 00:17:55.202 =================================================================================================================== 00:17:55.202 Total : 3117.26 12.18 0.00 0.00 40973.02 8102.63 49807.36 00:17:55.202 0 00:17:55.202 03:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:55.202 03:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 76552 00:17:55.202 03:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76552 ']' 00:17:55.202 03:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76552 00:17:55.202 03:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.202 03:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.202 03:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76552 00:17:55.202 03:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:55.202 03:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:55.202 killing process with pid 76552 00:17:55.202 03:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76552' 00:17:55.202 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.202 00:17:55.202 Latency(us) 00:17:55.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.202 =================================================================================================================== 00:17:55.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.202 03:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76552 00:17:55.202 [2024-07-13 03:07:01.644369] app.c:1023:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:55.202 03:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76552 00:17:56.578 03:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 76524 00:17:56.578 03:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76524 ']' 00:17:56.578 03:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76524 00:17:56.578 03:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:56.578 03:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:56.578 03:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76524 00:17:56.578 03:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:56.578 03:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:56.578 killing process with pid 76524 00:17:56.578 03:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76524' 00:17:56.578 03:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76524 00:17:56.578 [2024-07-13 03:07:02.735437] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:56.578 03:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76524 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76709 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76709 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76709 ']' 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.514 03:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.514 [2024-07-13 03:07:03.973197] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:57.514 [2024-07-13 03:07:03.973386] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.793 [2024-07-13 03:07:04.140168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.090 [2024-07-13 03:07:04.309026] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:58.090 [2024-07-13 03:07:04.309118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.090 [2024-07-13 03:07:04.309134] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.090 [2024-07-13 03:07:04.309149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.090 [2024-07-13 03:07:04.309161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.090 [2024-07-13 03:07:04.309202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.090 [2024-07-13 03:07:04.474680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:58.656 03:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.656 03:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:58.656 03:07:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.656 03:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.656 03:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.656 03:07:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.656 03:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.oWtksKmxih 00:17:58.656 03:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.oWtksKmxih 00:17:58.656 03:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:58.656 [2024-07-13 03:07:05.122021] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.656 03:07:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:58.915 03:07:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:59.174 [2024-07-13 03:07:05.550134] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:59.174 [2024-07-13 03:07:05.550462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.174 03:07:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:59.434 malloc0 00:17:59.434 03:07:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:59.693 03:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oWtksKmxih 00:17:59.952 [2024-07-13 03:07:06.287369] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:59.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
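Both deprecation warnings seen in this run (nvmf_tcp_psk_path on the target, spdk_nvme_ctrlr_opts.psk on the initiator) concern passing the PSK as a raw file path. The final attach below takes the other route: the key file is first registered with the keyring and then referenced by name. Condensed from the trace that follows; rpc.py again stands in for the full scripts/rpc.py path.

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oWtksKmxih     # register the PSK file as key0
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1   # --psk now names a key, not a path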
00:17:59.952 03:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:59.952 03:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=76758 00:17:59.952 03:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.952 03:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 76758 /var/tmp/bdevperf.sock 00:17:59.952 03:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76758 ']' 00:17:59.952 03:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.952 03:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.953 03:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.953 03:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.953 03:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.953 [2024-07-13 03:07:06.385861] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:59.953 [2024-07-13 03:07:06.386050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76758 ] 00:18:00.211 [2024-07-13 03:07:06.548859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.471 [2024-07-13 03:07:06.756409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.471 [2024-07-13 03:07:06.919119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:01.039 03:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.039 03:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:01.039 03:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oWtksKmxih 00:18:01.039 03:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:01.299 [2024-07-13 03:07:07.683110] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.299 nvme0n1 00:18:01.299 03:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:01.558 Running I/O for 1 seconds... 
00:18:02.494 00:18:02.494 Latency(us) 00:18:02.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.494 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.494 Verification LBA range: start 0x0 length 0x2000 00:18:02.494 nvme0n1 : 1.03 2986.75 11.67 0.00 0.00 42308.43 12690.15 29312.47 00:18:02.494 =================================================================================================================== 00:18:02.494 Total : 2986.75 11.67 0.00 0.00 42308.43 12690.15 29312.47 00:18:02.494 0 00:18:02.494 03:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 76758 00:18:02.494 03:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76758 ']' 00:18:02.494 03:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76758 00:18:02.494 03:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:02.494 03:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:02.494 03:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76758 00:18:02.494 killing process with pid 76758 00:18:02.494 Received shutdown signal, test time was about 1.000000 seconds 00:18:02.494 00:18:02.494 Latency(us) 00:18:02.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.494 =================================================================================================================== 00:18:02.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:02.494 03:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:02.494 03:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:02.494 03:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76758' 00:18:02.494 03:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76758 00:18:02.494 03:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76758 00:18:03.431 03:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 76709 00:18:03.431 03:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76709 ']' 00:18:03.431 03:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76709 00:18:03.431 03:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:03.431 03:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.431 03:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76709 00:18:03.690 killing process with pid 76709 00:18:03.690 03:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:03.690 03:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:03.690 03:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76709' 00:18:03.690 03:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76709 00:18:03.690 [2024-07-13 03:07:09.935656] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:03.690 03:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76709 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76828 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76828 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76828 ']' 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.623 03:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.905 [2024-07-13 03:07:11.164522] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:04.906 [2024-07-13 03:07:11.164684] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.906 [2024-07-13 03:07:11.324456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.164 [2024-07-13 03:07:11.473800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.164 [2024-07-13 03:07:11.473892] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.164 [2024-07-13 03:07:11.473920] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.164 [2024-07-13 03:07:11.473935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.164 [2024-07-13 03:07:11.473946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:05.164 [2024-07-13 03:07:11.473982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.164 [2024-07-13 03:07:11.630021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.731 [2024-07-13 03:07:12.084053] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.731 malloc0 00:18:05.731 [2024-07-13 03:07:12.134641] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.731 [2024-07-13 03:07:12.134969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=76860 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 76860 /var/tmp/bdevperf.sock 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76860 ']' 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.731 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.732 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.732 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.732 03:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.990 [2024-07-13 03:07:12.251469] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:05.990 [2024-07-13 03:07:12.251662] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76860 ] 00:18:05.990 [2024-07-13 03:07:12.407940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.248 [2024-07-13 03:07:12.568373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.248 [2024-07-13 03:07:12.727605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:06.824 03:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.824 03:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:06.824 03:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oWtksKmxih 00:18:07.082 03:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:07.340 [2024-07-13 03:07:13.594848] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:07.340 nvme0n1 00:18:07.340 03:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:07.340 Running I/O for 1 seconds... 00:18:08.714 00:18:08.714 Latency(us) 00:18:08.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.714 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:08.714 Verification LBA range: start 0x0 length 0x2000 00:18:08.714 nvme0n1 : 1.04 3067.85 11.98 0.00 0.00 41108.73 11141.12 27405.96 00:18:08.714 =================================================================================================================== 00:18:08.714 Total : 3067.85 11.98 0.00 0.00 41108.73 11141.12 27405.96 00:18:08.714 0 00:18:08.714 03:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:08.714 03:07:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.714 03:07:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.714 03:07:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.714 03:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:08.714 "subsystems": [ 00:18:08.714 { 00:18:08.714 "subsystem": "keyring", 00:18:08.714 "config": [ 00:18:08.714 { 00:18:08.714 "method": "keyring_file_add_key", 00:18:08.714 "params": { 00:18:08.714 "name": "key0", 00:18:08.714 "path": "/tmp/tmp.oWtksKmxih" 00:18:08.714 } 00:18:08.714 } 00:18:08.714 ] 00:18:08.714 }, 00:18:08.714 { 00:18:08.714 "subsystem": "iobuf", 00:18:08.714 "config": [ 00:18:08.714 { 00:18:08.714 "method": "iobuf_set_options", 00:18:08.714 "params": { 00:18:08.714 "small_pool_count": 8192, 00:18:08.715 "large_pool_count": 1024, 00:18:08.715 "small_bufsize": 8192, 00:18:08.715 "large_bufsize": 135168 00:18:08.715 } 00:18:08.715 } 00:18:08.715 ] 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "subsystem": "sock", 00:18:08.715 "config": [ 00:18:08.715 { 00:18:08.715 "method": "sock_set_default_impl", 00:18:08.715 "params": { 00:18:08.715 "impl_name": "uring" 
00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "sock_impl_set_options", 00:18:08.715 "params": { 00:18:08.715 "impl_name": "ssl", 00:18:08.715 "recv_buf_size": 4096, 00:18:08.715 "send_buf_size": 4096, 00:18:08.715 "enable_recv_pipe": true, 00:18:08.715 "enable_quickack": false, 00:18:08.715 "enable_placement_id": 0, 00:18:08.715 "enable_zerocopy_send_server": true, 00:18:08.715 "enable_zerocopy_send_client": false, 00:18:08.715 "zerocopy_threshold": 0, 00:18:08.715 "tls_version": 0, 00:18:08.715 "enable_ktls": false 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "sock_impl_set_options", 00:18:08.715 "params": { 00:18:08.715 "impl_name": "posix", 00:18:08.715 "recv_buf_size": 2097152, 00:18:08.715 "send_buf_size": 2097152, 00:18:08.715 "enable_recv_pipe": true, 00:18:08.715 "enable_quickack": false, 00:18:08.715 "enable_placement_id": 0, 00:18:08.715 "enable_zerocopy_send_server": true, 00:18:08.715 "enable_zerocopy_send_client": false, 00:18:08.715 "zerocopy_threshold": 0, 00:18:08.715 "tls_version": 0, 00:18:08.715 "enable_ktls": false 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "sock_impl_set_options", 00:18:08.715 "params": { 00:18:08.715 "impl_name": "uring", 00:18:08.715 "recv_buf_size": 2097152, 00:18:08.715 "send_buf_size": 2097152, 00:18:08.715 "enable_recv_pipe": true, 00:18:08.715 "enable_quickack": false, 00:18:08.715 "enable_placement_id": 0, 00:18:08.715 "enable_zerocopy_send_server": false, 00:18:08.715 "enable_zerocopy_send_client": false, 00:18:08.715 "zerocopy_threshold": 0, 00:18:08.715 "tls_version": 0, 00:18:08.715 "enable_ktls": false 00:18:08.715 } 00:18:08.715 } 00:18:08.715 ] 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "subsystem": "vmd", 00:18:08.715 "config": [] 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "subsystem": "accel", 00:18:08.715 "config": [ 00:18:08.715 { 00:18:08.715 "method": "accel_set_options", 00:18:08.715 "params": { 00:18:08.715 "small_cache_size": 128, 00:18:08.715 "large_cache_size": 16, 00:18:08.715 "task_count": 2048, 00:18:08.715 "sequence_count": 2048, 00:18:08.715 "buf_count": 2048 00:18:08.715 } 00:18:08.715 } 00:18:08.715 ] 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "subsystem": "bdev", 00:18:08.715 "config": [ 00:18:08.715 { 00:18:08.715 "method": "bdev_set_options", 00:18:08.715 "params": { 00:18:08.715 "bdev_io_pool_size": 65535, 00:18:08.715 "bdev_io_cache_size": 256, 00:18:08.715 "bdev_auto_examine": true, 00:18:08.715 "iobuf_small_cache_size": 128, 00:18:08.715 "iobuf_large_cache_size": 16 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "bdev_raid_set_options", 00:18:08.715 "params": { 00:18:08.715 "process_window_size_kb": 1024 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "bdev_iscsi_set_options", 00:18:08.715 "params": { 00:18:08.715 "timeout_sec": 30 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "bdev_nvme_set_options", 00:18:08.715 "params": { 00:18:08.715 "action_on_timeout": "none", 00:18:08.715 "timeout_us": 0, 00:18:08.715 "timeout_admin_us": 0, 00:18:08.715 "keep_alive_timeout_ms": 10000, 00:18:08.715 "arbitration_burst": 0, 00:18:08.715 "low_priority_weight": 0, 00:18:08.715 "medium_priority_weight": 0, 00:18:08.715 "high_priority_weight": 0, 00:18:08.715 "nvme_adminq_poll_period_us": 10000, 00:18:08.715 "nvme_ioq_poll_period_us": 0, 00:18:08.715 "io_queue_requests": 0, 00:18:08.715 "delay_cmd_submit": true, 00:18:08.715 "transport_retry_count": 4, 00:18:08.715 "bdev_retry_count": 3, 
00:18:08.715 "transport_ack_timeout": 0, 00:18:08.715 "ctrlr_loss_timeout_sec": 0, 00:18:08.715 "reconnect_delay_sec": 0, 00:18:08.715 "fast_io_fail_timeout_sec": 0, 00:18:08.715 "disable_auto_failback": false, 00:18:08.715 "generate_uuids": false, 00:18:08.715 "transport_tos": 0, 00:18:08.715 "nvme_error_stat": false, 00:18:08.715 "rdma_srq_size": 0, 00:18:08.715 "io_path_stat": false, 00:18:08.715 "allow_accel_sequence": false, 00:18:08.715 "rdma_max_cq_size": 0, 00:18:08.715 "rdma_cm_event_timeout_ms": 0, 00:18:08.715 "dhchap_digests": [ 00:18:08.715 "sha256", 00:18:08.715 "sha384", 00:18:08.715 "sha512" 00:18:08.715 ], 00:18:08.715 "dhchap_dhgroups": [ 00:18:08.715 "null", 00:18:08.715 "ffdhe2048", 00:18:08.715 "ffdhe3072", 00:18:08.715 "ffdhe4096", 00:18:08.715 "ffdhe6144", 00:18:08.715 "ffdhe8192" 00:18:08.715 ] 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "bdev_nvme_set_hotplug", 00:18:08.715 "params": { 00:18:08.715 "period_us": 100000, 00:18:08.715 "enable": false 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "bdev_malloc_create", 00:18:08.715 "params": { 00:18:08.715 "name": "malloc0", 00:18:08.715 "num_blocks": 8192, 00:18:08.715 "block_size": 4096, 00:18:08.715 "physical_block_size": 4096, 00:18:08.715 "uuid": "089538e7-083b-430c-8585-317eedf50f42", 00:18:08.715 "optimal_io_boundary": 0 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "bdev_wait_for_examine" 00:18:08.715 } 00:18:08.715 ] 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "subsystem": "nbd", 00:18:08.715 "config": [] 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "subsystem": "scheduler", 00:18:08.715 "config": [ 00:18:08.715 { 00:18:08.715 "method": "framework_set_scheduler", 00:18:08.715 "params": { 00:18:08.715 "name": "static" 00:18:08.715 } 00:18:08.715 } 00:18:08.715 ] 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "subsystem": "nvmf", 00:18:08.715 "config": [ 00:18:08.715 { 00:18:08.715 "method": "nvmf_set_config", 00:18:08.715 "params": { 00:18:08.715 "discovery_filter": "match_any", 00:18:08.715 "admin_cmd_passthru": { 00:18:08.715 "identify_ctrlr": false 00:18:08.715 } 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "nvmf_set_max_subsystems", 00:18:08.715 "params": { 00:18:08.715 "max_subsystems": 1024 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "nvmf_set_crdt", 00:18:08.715 "params": { 00:18:08.715 "crdt1": 0, 00:18:08.715 "crdt2": 0, 00:18:08.715 "crdt3": 0 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "nvmf_create_transport", 00:18:08.715 "params": { 00:18:08.715 "trtype": "TCP", 00:18:08.715 "max_queue_depth": 128, 00:18:08.715 "max_io_qpairs_per_ctrlr": 127, 00:18:08.715 "in_capsule_data_size": 4096, 00:18:08.715 "max_io_size": 131072, 00:18:08.715 "io_unit_size": 131072, 00:18:08.715 "max_aq_depth": 128, 00:18:08.715 "num_shared_buffers": 511, 00:18:08.715 "buf_cache_size": 4294967295, 00:18:08.715 "dif_insert_or_strip": false, 00:18:08.715 "zcopy": false, 00:18:08.715 "c2h_success": false, 00:18:08.715 "sock_priority": 0, 00:18:08.715 "abort_timeout_sec": 1, 00:18:08.715 "ack_timeout": 0, 00:18:08.715 "data_wr_pool_size": 0 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "nvmf_create_subsystem", 00:18:08.715 "params": { 00:18:08.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.715 "allow_any_host": false, 00:18:08.715 "serial_number": "00000000000000000000", 00:18:08.715 "model_number": "SPDK bdev Controller", 00:18:08.715 "max_namespaces": 32, 
00:18:08.715 "min_cntlid": 1, 00:18:08.715 "max_cntlid": 65519, 00:18:08.715 "ana_reporting": false 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "nvmf_subsystem_add_host", 00:18:08.715 "params": { 00:18:08.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.715 "host": "nqn.2016-06.io.spdk:host1", 00:18:08.715 "psk": "key0" 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.715 "method": "nvmf_subsystem_add_ns", 00:18:08.715 "params": { 00:18:08.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.715 "namespace": { 00:18:08.715 "nsid": 1, 00:18:08.715 "bdev_name": "malloc0", 00:18:08.715 "nguid": "089538E7083B430C8585317EEDF50F42", 00:18:08.715 "uuid": "089538e7-083b-430c-8585-317eedf50f42", 00:18:08.715 "no_auto_visible": false 00:18:08.715 } 00:18:08.715 } 00:18:08.715 }, 00:18:08.715 { 00:18:08.716 "method": "nvmf_subsystem_add_listener", 00:18:08.716 "params": { 00:18:08.716 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.716 "listen_address": { 00:18:08.716 "trtype": "TCP", 00:18:08.716 "adrfam": "IPv4", 00:18:08.716 "traddr": "10.0.0.2", 00:18:08.716 "trsvcid": "4420" 00:18:08.716 }, 00:18:08.716 "secure_channel": true 00:18:08.716 } 00:18:08.716 } 00:18:08.716 ] 00:18:08.716 } 00:18:08.716 ] 00:18:08.716 }' 00:18:08.716 03:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:08.974 03:07:15 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:08.974 "subsystems": [ 00:18:08.974 { 00:18:08.974 "subsystem": "keyring", 00:18:08.974 "config": [ 00:18:08.974 { 00:18:08.974 "method": "keyring_file_add_key", 00:18:08.974 "params": { 00:18:08.974 "name": "key0", 00:18:08.974 "path": "/tmp/tmp.oWtksKmxih" 00:18:08.974 } 00:18:08.974 } 00:18:08.974 ] 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "subsystem": "iobuf", 00:18:08.974 "config": [ 00:18:08.974 { 00:18:08.974 "method": "iobuf_set_options", 00:18:08.974 "params": { 00:18:08.974 "small_pool_count": 8192, 00:18:08.974 "large_pool_count": 1024, 00:18:08.974 "small_bufsize": 8192, 00:18:08.974 "large_bufsize": 135168 00:18:08.974 } 00:18:08.974 } 00:18:08.974 ] 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "subsystem": "sock", 00:18:08.974 "config": [ 00:18:08.974 { 00:18:08.974 "method": "sock_set_default_impl", 00:18:08.974 "params": { 00:18:08.974 "impl_name": "uring" 00:18:08.974 } 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "method": "sock_impl_set_options", 00:18:08.974 "params": { 00:18:08.974 "impl_name": "ssl", 00:18:08.974 "recv_buf_size": 4096, 00:18:08.974 "send_buf_size": 4096, 00:18:08.974 "enable_recv_pipe": true, 00:18:08.974 "enable_quickack": false, 00:18:08.974 "enable_placement_id": 0, 00:18:08.974 "enable_zerocopy_send_server": true, 00:18:08.974 "enable_zerocopy_send_client": false, 00:18:08.974 "zerocopy_threshold": 0, 00:18:08.974 "tls_version": 0, 00:18:08.974 "enable_ktls": false 00:18:08.974 } 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "method": "sock_impl_set_options", 00:18:08.974 "params": { 00:18:08.974 "impl_name": "posix", 00:18:08.974 "recv_buf_size": 2097152, 00:18:08.974 "send_buf_size": 2097152, 00:18:08.974 "enable_recv_pipe": true, 00:18:08.974 "enable_quickack": false, 00:18:08.974 "enable_placement_id": 0, 00:18:08.974 "enable_zerocopy_send_server": true, 00:18:08.974 "enable_zerocopy_send_client": false, 00:18:08.974 "zerocopy_threshold": 0, 00:18:08.974 "tls_version": 0, 00:18:08.974 "enable_ktls": false 00:18:08.974 } 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "method": 
"sock_impl_set_options", 00:18:08.974 "params": { 00:18:08.974 "impl_name": "uring", 00:18:08.974 "recv_buf_size": 2097152, 00:18:08.974 "send_buf_size": 2097152, 00:18:08.974 "enable_recv_pipe": true, 00:18:08.974 "enable_quickack": false, 00:18:08.974 "enable_placement_id": 0, 00:18:08.974 "enable_zerocopy_send_server": false, 00:18:08.974 "enable_zerocopy_send_client": false, 00:18:08.974 "zerocopy_threshold": 0, 00:18:08.974 "tls_version": 0, 00:18:08.974 "enable_ktls": false 00:18:08.974 } 00:18:08.974 } 00:18:08.974 ] 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "subsystem": "vmd", 00:18:08.974 "config": [] 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "subsystem": "accel", 00:18:08.974 "config": [ 00:18:08.974 { 00:18:08.974 "method": "accel_set_options", 00:18:08.974 "params": { 00:18:08.974 "small_cache_size": 128, 00:18:08.974 "large_cache_size": 16, 00:18:08.974 "task_count": 2048, 00:18:08.974 "sequence_count": 2048, 00:18:08.974 "buf_count": 2048 00:18:08.974 } 00:18:08.974 } 00:18:08.974 ] 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "subsystem": "bdev", 00:18:08.974 "config": [ 00:18:08.974 { 00:18:08.974 "method": "bdev_set_options", 00:18:08.974 "params": { 00:18:08.974 "bdev_io_pool_size": 65535, 00:18:08.974 "bdev_io_cache_size": 256, 00:18:08.974 "bdev_auto_examine": true, 00:18:08.974 "iobuf_small_cache_size": 128, 00:18:08.974 "iobuf_large_cache_size": 16 00:18:08.974 } 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "method": "bdev_raid_set_options", 00:18:08.974 "params": { 00:18:08.974 "process_window_size_kb": 1024 00:18:08.974 } 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "method": "bdev_iscsi_set_options", 00:18:08.974 "params": { 00:18:08.974 "timeout_sec": 30 00:18:08.974 } 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "method": "bdev_nvme_set_options", 00:18:08.974 "params": { 00:18:08.974 "action_on_timeout": "none", 00:18:08.974 "timeout_us": 0, 00:18:08.974 "timeout_admin_us": 0, 00:18:08.974 "keep_alive_timeout_ms": 10000, 00:18:08.974 "arbitration_burst": 0, 00:18:08.974 "low_priority_weight": 0, 00:18:08.974 "medium_priority_weight": 0, 00:18:08.974 "high_priority_weight": 0, 00:18:08.974 "nvme_adminq_poll_period_us": 10000, 00:18:08.974 "nvme_ioq_poll_period_us": 0, 00:18:08.974 "io_queue_requests": 512, 00:18:08.974 "delay_cmd_submit": true, 00:18:08.974 "transport_retry_count": 4, 00:18:08.974 "bdev_retry_count": 3, 00:18:08.974 "transport_ack_timeout": 0, 00:18:08.974 "ctrlr_loss_timeout_sec": 0, 00:18:08.974 "reconnect_delay_sec": 0, 00:18:08.974 "fast_io_fail_timeout_sec": 0, 00:18:08.974 "disable_auto_failback": false, 00:18:08.974 "generate_uuids": false, 00:18:08.974 "transport_tos": 0, 00:18:08.974 "nvme_error_stat": false, 00:18:08.974 "rdma_srq_size": 0, 00:18:08.974 "io_path_stat": false, 00:18:08.974 "allow_accel_sequence": false, 00:18:08.974 "rdma_max_cq_size": 0, 00:18:08.974 "rdma_cm_event_timeout_ms": 0, 00:18:08.974 "dhchap_digests": [ 00:18:08.974 "sha256", 00:18:08.974 "sha384", 00:18:08.974 "sha512" 00:18:08.974 ], 00:18:08.974 "dhchap_dhgroups": [ 00:18:08.974 "null", 00:18:08.974 "ffdhe2048", 00:18:08.974 "ffdhe3072", 00:18:08.974 "ffdhe4096", 00:18:08.974 "ffdhe6144", 00:18:08.974 "ffdhe8192" 00:18:08.974 ] 00:18:08.974 } 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "method": "bdev_nvme_attach_controller", 00:18:08.974 "params": { 00:18:08.974 "name": "nvme0", 00:18:08.974 "trtype": "TCP", 00:18:08.974 "adrfam": "IPv4", 00:18:08.974 "traddr": "10.0.0.2", 00:18:08.974 "trsvcid": "4420", 00:18:08.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:08.974 "prchk_reftag": false, 00:18:08.974 "prchk_guard": false, 00:18:08.974 "ctrlr_loss_timeout_sec": 0, 00:18:08.974 "reconnect_delay_sec": 0, 00:18:08.974 "fast_io_fail_timeout_sec": 0, 00:18:08.974 "psk": "key0", 00:18:08.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.974 "hdgst": false, 00:18:08.974 "ddgst": false 00:18:08.974 } 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "method": "bdev_nvme_set_hotplug", 00:18:08.974 "params": { 00:18:08.974 "period_us": 100000, 00:18:08.974 "enable": false 00:18:08.974 } 00:18:08.974 }, 00:18:08.974 { 00:18:08.974 "method": "bdev_enable_histogram", 00:18:08.974 "params": { 00:18:08.974 "name": "nvme0n1", 00:18:08.974 "enable": true 00:18:08.974 } 00:18:08.974 }, 00:18:08.974 { 00:18:08.975 "method": "bdev_wait_for_examine" 00:18:08.975 } 00:18:08.975 ] 00:18:08.975 }, 00:18:08.975 { 00:18:08.975 "subsystem": "nbd", 00:18:08.975 "config": [] 00:18:08.975 } 00:18:08.975 ] 00:18:08.975 }' 00:18:08.975 03:07:15 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 76860 00:18:08.975 03:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76860 ']' 00:18:08.975 03:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76860 00:18:08.975 03:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:08.975 03:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.975 03:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76860 00:18:08.975 03:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:08.975 03:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:08.975 killing process with pid 76860 00:18:08.975 03:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76860' 00:18:08.975 Received shutdown signal, test time was about 1.000000 seconds 00:18:08.975 00:18:08.975 Latency(us) 00:18:08.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.975 =================================================================================================================== 00:18:08.975 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.975 03:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76860 00:18:08.975 03:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76860 00:18:09.910 03:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 76828 00:18:09.910 03:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76828 ']' 00:18:09.910 03:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76828 00:18:09.910 03:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:09.910 03:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.910 03:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76828 00:18:09.910 03:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:09.910 03:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:09.910 killing process with pid 76828 00:18:09.910 03:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76828' 00:18:09.910 03:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76828 00:18:09.910 03:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76828 
00:18:11.288 03:07:17 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:11.288 03:07:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.288 03:07:17 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:11.288 "subsystems": [ 00:18:11.288 { 00:18:11.288 "subsystem": "keyring", 00:18:11.288 "config": [ 00:18:11.288 { 00:18:11.288 "method": "keyring_file_add_key", 00:18:11.288 "params": { 00:18:11.288 "name": "key0", 00:18:11.288 "path": "/tmp/tmp.oWtksKmxih" 00:18:11.288 } 00:18:11.288 } 00:18:11.288 ] 00:18:11.288 }, 00:18:11.288 { 00:18:11.288 "subsystem": "iobuf", 00:18:11.288 "config": [ 00:18:11.288 { 00:18:11.288 "method": "iobuf_set_options", 00:18:11.288 "params": { 00:18:11.288 "small_pool_count": 8192, 00:18:11.288 "large_pool_count": 1024, 00:18:11.288 "small_bufsize": 8192, 00:18:11.288 "large_bufsize": 135168 00:18:11.288 } 00:18:11.288 } 00:18:11.288 ] 00:18:11.288 }, 00:18:11.288 { 00:18:11.288 "subsystem": "sock", 00:18:11.288 "config": [ 00:18:11.288 { 00:18:11.288 "method": "sock_set_default_impl", 00:18:11.288 "params": { 00:18:11.288 "impl_name": "uring" 00:18:11.288 } 00:18:11.288 }, 00:18:11.288 { 00:18:11.288 "method": "sock_impl_set_options", 00:18:11.288 "params": { 00:18:11.288 "impl_name": "ssl", 00:18:11.288 "recv_buf_size": 4096, 00:18:11.288 "send_buf_size": 4096, 00:18:11.288 "enable_recv_pipe": true, 00:18:11.288 "enable_quickack": false, 00:18:11.288 "enable_placement_id": 0, 00:18:11.288 "enable_zerocopy_send_server": true, 00:18:11.288 "enable_zerocopy_send_client": false, 00:18:11.288 "zerocopy_threshold": 0, 00:18:11.288 "tls_version": 0, 00:18:11.288 "enable_ktls": false 00:18:11.288 } 00:18:11.288 }, 00:18:11.288 { 00:18:11.288 "method": "sock_impl_set_options", 00:18:11.288 "params": { 00:18:11.288 "impl_name": "posix", 00:18:11.288 "recv_buf_size": 2097152, 00:18:11.288 "send_buf_size": 2097152, 00:18:11.288 "enable_recv_pipe": true, 00:18:11.288 "enable_quickack": false, 00:18:11.288 "enable_placement_id": 0, 00:18:11.288 "enable_zerocopy_send_server": true, 00:18:11.288 "enable_zerocopy_send_client": false, 00:18:11.288 "zerocopy_threshold": 0, 00:18:11.289 "tls_version": 0, 00:18:11.289 "enable_ktls": false 00:18:11.289 } 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "method": "sock_impl_set_options", 00:18:11.289 "params": { 00:18:11.289 "impl_name": "uring", 00:18:11.289 "recv_buf_size": 2097152, 00:18:11.289 "send_buf_size": 2097152, 00:18:11.289 "enable_recv_pipe": true, 00:18:11.289 "enable_quickack": false, 00:18:11.289 "enable_placement_id": 0, 00:18:11.289 "enable_zerocopy_send_server": false, 00:18:11.289 "enable_zerocopy_send_client": false, 00:18:11.289 "zerocopy_threshold": 0, 00:18:11.289 "tls_version": 0, 00:18:11.289 "enable_ktls": false 00:18:11.289 } 00:18:11.289 } 00:18:11.289 ] 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "subsystem": "vmd", 00:18:11.289 "config": [] 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "subsystem": "accel", 00:18:11.289 "config": [ 00:18:11.289 { 00:18:11.289 "method": "accel_set_options", 00:18:11.289 "params": { 00:18:11.289 "small_cache_size": 128, 00:18:11.289 "large_cache_size": 16, 00:18:11.289 "task_count": 2048, 00:18:11.289 "sequence_count": 2048, 00:18:11.289 "buf_count": 2048 00:18:11.289 } 00:18:11.289 } 00:18:11.289 ] 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "subsystem": "bdev", 00:18:11.289 "config": [ 00:18:11.289 { 00:18:11.289 "method": "bdev_set_options", 00:18:11.289 "params": { 00:18:11.289 "bdev_io_pool_size": 65535, 
00:18:11.289 "bdev_io_cache_size": 256, 00:18:11.289 "bdev_auto_examine": true, 00:18:11.289 "iobuf_small_cache_size": 128, 00:18:11.289 "iobuf_large_cache_size": 16 00:18:11.289 } 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "method": "bdev_raid_set_options", 00:18:11.289 "params": { 00:18:11.289 "process_window_size_kb": 1024 00:18:11.289 } 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "method": "bdev_iscsi_set_options", 00:18:11.289 "params": { 00:18:11.289 "timeout_sec": 30 00:18:11.289 } 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "method": "bdev_nvme_set_options", 00:18:11.289 "params": { 00:18:11.289 "action_on_timeout": "none", 00:18:11.289 "timeout_us": 0, 00:18:11.289 "timeout_admin_us": 0, 00:18:11.289 "keep_alive_timeout_ms": 10000, 00:18:11.289 "arbitration_burst": 0, 00:18:11.289 "low_priority_weight": 0, 00:18:11.289 "medium_priority_weight": 0, 00:18:11.289 "high_priority_weight": 0, 00:18:11.289 "nvme_adminq_poll_period_us": 10000, 00:18:11.289 "nvme_ioq_poll_period_us": 0, 00:18:11.289 "io_queue_requests": 0, 00:18:11.289 "delay_cmd_submit": true, 00:18:11.289 "transport_retry_count": 4, 00:18:11.289 "bdev_retry_count": 3, 00:18:11.289 "transport_ack_timeout": 0, 00:18:11.289 "ctrlr_loss_timeout_sec": 0, 00:18:11.289 "reconnect_delay_sec": 0, 00:18:11.289 "fast_io_fail_timeout_sec": 0, 00:18:11.289 "disable_auto_failback": false, 00:18:11.289 "generate_uuids": false, 00:18:11.289 "transport_tos": 0, 00:18:11.289 "nvme_error_stat": false, 00:18:11.289 "rdma_srq_size": 0, 00:18:11.289 "io_path_stat": false, 00:18:11.289 "allow_accel_sequence": false, 00:18:11.289 "rdma_max_cq_size": 0, 00:18:11.289 "rdma_cm_event_timeout_ms": 0, 00:18:11.289 "dhchap_digests": [ 00:18:11.289 "sha256", 00:18:11.289 "sha384", 00:18:11.289 "sha512" 00:18:11.289 ], 00:18:11.289 "dhchap_dhgroups": [ 00:18:11.289 "null", 00:18:11.289 "ffdhe2048", 00:18:11.289 "ffdhe3072", 00:18:11.289 "ffdhe4096", 00:18:11.289 "ffdhe6144", 00:18:11.289 "ffdhe8192" 00:18:11.289 ] 00:18:11.289 } 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "method": "bdev_nvme_set_hotplug", 00:18:11.289 "params": { 00:18:11.289 "period_us": 100000, 00:18:11.289 "enable": false 00:18:11.289 } 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "method": "bdev_malloc_create", 00:18:11.289 "params": { 00:18:11.289 "name": "malloc0", 00:18:11.289 "num_blocks": 8192, 00:18:11.289 "block_size": 4096, 00:18:11.289 "physical_block_size": 4096, 00:18:11.289 "uuid": "089538e7-083b-430c-8585-317eedf50f42", 00:18:11.289 "optimal_io_boundary": 0 00:18:11.289 } 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "method": "bdev_wait_for_examine" 00:18:11.289 } 00:18:11.289 ] 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "subsystem": "nbd", 00:18:11.289 "config": [] 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "subsystem": "scheduler", 00:18:11.289 "config": [ 00:18:11.289 { 00:18:11.289 "method": "framework_set_scheduler", 00:18:11.289 "params": { 00:18:11.289 "name": "static" 00:18:11.289 } 00:18:11.289 } 00:18:11.289 ] 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "subsystem": "nvmf", 00:18:11.289 "config": [ 00:18:11.289 { 00:18:11.289 "method": "nvmf_set_config", 00:18:11.289 "params": { 00:18:11.289 "discovery_filter": "match_any", 00:18:11.289 "admin_cmd_passthru": { 00:18:11.289 "identify_ctrlr": false 00:18:11.289 } 00:18:11.289 } 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "method": "nvmf_set_max_subsystems", 00:18:11.289 "params": { 00:18:11.289 "max_subsystems": 1024 00:18:11.289 } 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "method": "nvmf_set_crdt", 
00:18:11.289 "params": { 00:18:11.289 "crdt1": 0, 00:18:11.289 "crdt2": 0, 00:18:11.289 "crdt3": 0 00:18:11.289 } 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "method": "nvmf_create_transport", 00:18:11.289 "params": { 00:18:11.289 "trtype": "TCP", 00:18:11.289 "max_queue_depth": 128, 00:18:11.289 "max_io_qpairs_per_ctrlr": 127, 00:18:11.289 "in_capsule_data_size": 4096, 00:18:11.289 "max_io_size": 131072, 00:18:11.289 "io_unit_size": 131072, 00:18:11.289 "max_aq_depth": 128, 00:18:11.289 "num_shared_buffers": 511, 00:18:11.289 "buf_cache_size": 4294967295, 00:18:11.289 "dif_insert_or_strip": false, 00:18:11.289 "zcopy": false, 00:18:11.289 "c2h_success": false, 00:18:11.289 "sock_priority": 0, 00:18:11.289 "abort_timeout_sec": 1, 00:18:11.289 "ack_timeout": 0, 00:18:11.289 "data_wr_pool_size": 0 00:18:11.289 } 00:18:11.289 }, 00:18:11.289 { 00:18:11.289 "method": "nvmf_create_subsystem", 00:18:11.289 "params": { 00:18:11.289 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.289 "allow_any_host": false, 00:18:11.289 "serial_number": "00000000000000000000", 00:18:11.289 "model_number": "SPDK bdev Controller", 00:18:11.289 "max_namespaces": 32, 00:18:11.289 "min_cntlid": 1, 00:18:11.289 "max_cntlid": 65519, 00:18:11.289 "ana_reporting": false 00:18:11.289 } 00:18:11.290 }, 00:18:11.290 { 00:18:11.290 "method": "nvmf_subsystem_add_host", 00:18:11.290 "params": { 00:18:11.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.290 "host": "nqn.2016-06.io.spdk:host1", 00:18:11.290 "psk": "key0" 00:18:11.290 } 00:18:11.290 }, 00:18:11.290 { 00:18:11.290 "method": "nvmf_subsystem_add_ns", 00:18:11.290 "params": { 00:18:11.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.290 "namespace": { 00:18:11.290 "nsid": 1, 00:18:11.290 "bdev_name": "malloc0", 00:18:11.290 "nguid": "089538E7083B430C8585317EEDF50F42", 00:18:11.290 "uuid": "089538e7-083b-430c-8585-317eedf50f42", 00:18:11.290 "no_auto_visible": false 00:18:11.290 } 00:18:11.290 } 00:18:11.290 }, 00:18:11.290 { 00:18:11.290 "method": "nvmf_subsystem_add_listener", 00:18:11.290 "params": { 00:18:11.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.290 "listen_address": { 00:18:11.290 "trtype": "TCP", 00:18:11.290 "adrfam": "IPv4", 00:18:11.290 "traddr": "10.0.0.2", 00:18:11.290 "trsvcid": "4420" 00:18:11.290 }, 00:18:11.290 "secure_channel": true 00:18:11.290 } 00:18:11.290 } 00:18:11.290 ] 00:18:11.290 } 00:18:11.290 ] 00:18:11.290 }' 00:18:11.290 03:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:11.290 03:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.290 03:07:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76934 00:18:11.290 03:07:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:11.290 03:07:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76934 00:18:11.290 03:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76934 ']' 00:18:11.290 03:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.290 03:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.290 03:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:11.290 03:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.290 03:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.290 [2024-07-13 03:07:17.584753] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:11.290 [2024-07-13 03:07:17.585061] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.290 [2024-07-13 03:07:17.765133] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.549 [2024-07-13 03:07:17.949310] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.549 [2024-07-13 03:07:17.949425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.549 [2024-07-13 03:07:17.949440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.549 [2024-07-13 03:07:17.949453] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.549 [2024-07-13 03:07:17.949463] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.549 [2024-07-13 03:07:17.949585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.808 [2024-07-13 03:07:18.232126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:12.066 [2024-07-13 03:07:18.382943] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.066 [2024-07-13 03:07:18.414896] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:12.066 [2024-07-13 03:07:18.426101] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.066 03:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:12.066 03:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:12.066 03:07:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:12.066 03:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:12.066 03:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.326 03:07:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.326 03:07:18 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=76966 00:18:12.326 03:07:18 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 76966 /var/tmp/bdevperf.sock 00:18:12.326 03:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76966 ']' 00:18:12.326 03:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.326 03:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.326 03:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:12.326 03:07:18 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:12.326 03:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.326 03:07:18 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:12.326 "subsystems": [ 00:18:12.326 { 00:18:12.326 "subsystem": "keyring", 00:18:12.326 "config": [ 00:18:12.326 { 00:18:12.326 "method": "keyring_file_add_key", 00:18:12.326 "params": { 00:18:12.326 "name": "key0", 00:18:12.326 "path": "/tmp/tmp.oWtksKmxih" 00:18:12.326 } 00:18:12.326 } 00:18:12.326 ] 00:18:12.326 }, 00:18:12.326 { 00:18:12.326 "subsystem": "iobuf", 00:18:12.326 "config": [ 00:18:12.326 { 00:18:12.326 "method": "iobuf_set_options", 00:18:12.326 "params": { 00:18:12.326 "small_pool_count": 8192, 00:18:12.326 "large_pool_count": 1024, 00:18:12.326 "small_bufsize": 8192, 00:18:12.326 "large_bufsize": 135168 00:18:12.326 } 00:18:12.326 } 00:18:12.326 ] 00:18:12.326 }, 00:18:12.326 { 00:18:12.326 "subsystem": "sock", 00:18:12.326 "config": [ 00:18:12.326 { 00:18:12.326 "method": "sock_set_default_impl", 00:18:12.326 "params": { 00:18:12.326 "impl_name": "uring" 00:18:12.326 } 00:18:12.326 }, 00:18:12.326 { 00:18:12.326 "method": "sock_impl_set_options", 00:18:12.326 "params": { 00:18:12.326 "impl_name": "ssl", 00:18:12.326 "recv_buf_size": 4096, 00:18:12.326 "send_buf_size": 4096, 00:18:12.326 "enable_recv_pipe": true, 00:18:12.326 "enable_quickack": false, 00:18:12.326 "enable_placement_id": 0, 00:18:12.326 "enable_zerocopy_send_server": true, 00:18:12.326 "enable_zerocopy_send_client": false, 00:18:12.326 "zerocopy_threshold": 0, 00:18:12.326 "tls_version": 0, 00:18:12.326 "enable_ktls": false 00:18:12.326 } 00:18:12.326 }, 00:18:12.326 { 00:18:12.326 "method": "sock_impl_set_options", 00:18:12.326 "params": { 00:18:12.326 "impl_name": "posix", 00:18:12.326 "recv_buf_size": 2097152, 00:18:12.326 "send_buf_size": 2097152, 00:18:12.326 "enable_recv_pipe": true, 00:18:12.326 "enable_quickack": false, 00:18:12.326 "enable_placement_id": 0, 00:18:12.326 "enable_zerocopy_send_server": true, 00:18:12.326 "enable_zerocopy_send_client": false, 00:18:12.326 "zerocopy_threshold": 0, 00:18:12.326 "tls_version": 0, 00:18:12.326 "enable_ktls": false 00:18:12.326 } 00:18:12.326 }, 00:18:12.326 { 00:18:12.326 "method": "sock_impl_set_options", 00:18:12.326 "params": { 00:18:12.326 "impl_name": "uring", 00:18:12.326 "recv_buf_size": 2097152, 00:18:12.326 "send_buf_size": 2097152, 00:18:12.326 "enable_recv_pipe": true, 00:18:12.326 "enable_quickack": false, 00:18:12.326 "enable_placement_id": 0, 00:18:12.326 "enable_zerocopy_send_server": false, 00:18:12.326 "enable_zerocopy_send_client": false, 00:18:12.326 "zerocopy_threshold": 0, 00:18:12.326 "tls_version": 0, 00:18:12.326 "enable_ktls": false 00:18:12.326 } 00:18:12.326 } 00:18:12.326 ] 00:18:12.326 }, 00:18:12.326 { 00:18:12.326 "subsystem": "vmd", 00:18:12.326 "config": [] 00:18:12.326 }, 00:18:12.326 { 00:18:12.326 "subsystem": "accel", 00:18:12.326 "config": [ 00:18:12.326 { 00:18:12.326 "method": "accel_set_options", 00:18:12.326 "params": { 00:18:12.326 "small_cache_size": 128, 00:18:12.326 "large_cache_size": 16, 00:18:12.326 "task_count": 2048, 00:18:12.326 "sequence_count": 2048, 00:18:12.326 "buf_count": 2048 00:18:12.326 } 00:18:12.326 } 00:18:12.326 ] 00:18:12.326 }, 00:18:12.326 { 00:18:12.326 "subsystem": "bdev", 00:18:12.326 "config": [ 00:18:12.326 { 
00:18:12.326 "method": "bdev_set_options", 00:18:12.326 "params": { 00:18:12.326 "bdev_io_pool_size": 65535, 00:18:12.326 "bdev_io_cache_size": 256, 00:18:12.326 "bdev_auto_examine": true, 00:18:12.326 "iobuf_small_cache_size": 128, 00:18:12.326 "iobuf_large_cache_size": 16 00:18:12.326 } 00:18:12.326 }, 00:18:12.326 { 00:18:12.326 "method": "bdev_raid_set_options", 00:18:12.326 "params": { 00:18:12.326 "process_window_size_kb": 1024 00:18:12.326 } 00:18:12.326 }, 00:18:12.326 { 00:18:12.326 "method": "bdev_iscsi_set_options", 00:18:12.326 "params": { 00:18:12.326 "timeout_sec": 30 00:18:12.326 } 00:18:12.326 }, 00:18:12.326 { 00:18:12.326 "method": "bdev_nvme_set_options", 00:18:12.326 "params": { 00:18:12.326 "action_on_timeout": "none", 00:18:12.326 "timeout_us": 0, 00:18:12.326 "timeout_admin_us": 0, 00:18:12.326 "keep_alive_timeout_ms": 10000, 00:18:12.326 "arbitration_burst": 0, 00:18:12.326 "low_priority_weight": 0, 00:18:12.326 "medium_priority_weight": 0, 00:18:12.326 "high_priority_weight": 0, 00:18:12.326 "nvme_adminq_poll_period_us": 10000, 00:18:12.326 "nvme_ioq_poll_period_us": 0, 00:18:12.326 "io_queue_requests": 512, 00:18:12.327 "delay_cmd_submit": true, 00:18:12.327 "transport_retry_count": 4, 00:18:12.327 "bdev_retry_count": 3, 00:18:12.327 "transport_ack_timeout": 0, 00:18:12.327 "ctrlr_loss_timeout_sec": 0, 00:18:12.327 "reconnect_delay_sec": 0, 00:18:12.327 "fast_io_fail_timeout_sec": 0, 00:18:12.327 "disable_auto_failback": false, 00:18:12.327 "generate_uuids": false, 00:18:12.327 "transport_tos": 0, 00:18:12.327 "nvme_error_stat": false, 00:18:12.327 "rdma_srq_size": 0, 00:18:12.327 "io_path_stat": false, 00:18:12.327 "allow_accel_sequence": false, 00:18:12.327 "rdma_max_cq_size": 0, 00:18:12.327 "rdma_cm_event_timeout_ms": 0, 00:18:12.327 "dhchap_digests": [ 00:18:12.327 "sha256", 00:18:12.327 "sha384", 00:18:12.327 "sha512" 00:18:12.327 ], 00:18:12.327 "dhchap_dhgroups": [ 00:18:12.327 "null", 00:18:12.327 "ffdhe2048", 00:18:12.327 "ffdhe3072", 00:18:12.327 "ffdhe4096", 00:18:12.327 "ffdhe6144", 00:18:12.327 "ffdhe8192" 00:18:12.327 ] 00:18:12.327 } 00:18:12.327 }, 00:18:12.327 { 00:18:12.327 "method": "bdev_nvme_attach_controller", 00:18:12.327 "params": { 00:18:12.327 "name": "nvme0", 00:18:12.327 "trtype": "TCP", 00:18:12.327 "adrfam": "IPv4", 00:18:12.327 "traddr": "10.0.0.2", 00:18:12.327 "trsvcid": "4420", 00:18:12.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.327 "prchk_reftag": false, 00:18:12.327 "prchk_guard": false, 00:18:12.327 "ctrlr_loss_timeout_sec": 0, 00:18:12.327 "reconnect_delay_sec": 0, 00:18:12.327 "fast_io_fail_timeout_sec": 0, 00:18:12.327 "psk": "key0", 00:18:12.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.327 "hdgst": false, 00:18:12.327 "ddgst": false 00:18:12.327 } 00:18:12.327 }, 00:18:12.327 { 00:18:12.327 "method": "bdev_nvme_set_hotplug", 00:18:12.327 "params": { 00:18:12.327 "period_us": 100000, 00:18:12.327 "enable": false 00:18:12.327 } 00:18:12.327 }, 00:18:12.327 { 00:18:12.327 "method": "bdev_enable_histogram", 00:18:12.327 "params": { 00:18:12.327 "name": "nvme0n1", 00:18:12.327 "enable": true 00:18:12.327 } 00:18:12.327 }, 00:18:12.327 { 00:18:12.327 "method": "bdev_wait_for_examine" 00:18:12.327 } 00:18:12.327 ] 00:18:12.327 }, 00:18:12.327 { 00:18:12.327 "subsystem": "nbd", 00:18:12.327 "config": [] 00:18:12.327 } 00:18:12.327 ] 00:18:12.327 }' 00:18:12.327 03:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.327 [2024-07-13 03:07:18.689915] Starting SPDK v24.09-pre git sha1 
719d03c6a / DPDK 24.03.0 initialization... 00:18:12.327 [2024-07-13 03:07:18.690100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76966 ] 00:18:12.586 [2024-07-13 03:07:18.862789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.586 [2024-07-13 03:07:19.045677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.845 [2024-07-13 03:07:19.301814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:13.104 [2024-07-13 03:07:19.403928] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.104 03:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.104 03:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:13.104 03:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:13.104 03:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:13.363 03:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.363 03:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:13.622 Running I/O for 1 seconds... 00:18:14.558 00:18:14.558 Latency(us) 00:18:14.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.558 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:14.558 Verification LBA range: start 0x0 length 0x2000 00:18:14.558 nvme0n1 : 1.03 2772.84 10.83 0.00 0.00 45270.07 8043.05 27525.12 00:18:14.558 =================================================================================================================== 00:18:14.558 Total : 2772.84 10.83 0.00 0.00 45270.07 8043.05 27525.12 00:18:14.558 0 00:18:14.558 03:07:20 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:14.558 03:07:20 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:14.558 03:07:20 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:14.558 03:07:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:14.558 03:07:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:14.558 03:07:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:14.558 03:07:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:14.558 03:07:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:14.558 03:07:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:14.558 03:07:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:14.558 03:07:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:14.558 nvmf_trace.0 00:18:14.816 03:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:14.816 03:07:21 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 76966 00:18:14.816 03:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76966 ']' 00:18:14.816 
03:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76966 00:18:14.816 03:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:14.816 03:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.816 03:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76966 00:18:14.816 03:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:14.816 03:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:14.816 killing process with pid 76966 00:18:14.816 03:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76966' 00:18:14.816 03:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76966 00:18:14.816 Received shutdown signal, test time was about 1.000000 seconds 00:18:14.816 00:18:14.816 Latency(us) 00:18:14.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.816 =================================================================================================================== 00:18:14.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.816 03:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76966 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:15.747 rmmod nvme_tcp 00:18:15.747 rmmod nvme_fabrics 00:18:15.747 rmmod nvme_keyring 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 76934 ']' 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 76934 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76934 ']' 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76934 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76934 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76934' 00:18:15.747 killing process with pid 76934 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76934 00:18:15.747 03:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76934 00:18:17.123 03:07:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.123 03:07:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:18:17.123 03:07:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.123 03:07:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.123 03:07:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.123 03:07:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.123 03:07:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.123 03:07:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.123 03:07:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:17.123 03:07:23 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.unf3Fg6ZTR /tmp/tmp.4cmnibX8D9 /tmp/tmp.oWtksKmxih 00:18:17.123 00:18:17.123 real 1m42.327s 00:18:17.123 user 2m44.180s 00:18:17.123 sys 0m26.160s 00:18:17.123 03:07:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:17.123 03:07:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.123 ************************************ 00:18:17.123 END TEST nvmf_tls 00:18:17.123 ************************************ 00:18:17.123 03:07:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:17.123 03:07:23 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:17.123 03:07:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:17.123 03:07:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.123 03:07:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:17.123 ************************************ 00:18:17.123 START TEST nvmf_fips 00:18:17.123 ************************************ 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:17.123 * Looking for test storage... 
00:18:17.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.123 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:17.383 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:17.384 Error setting digest 00:18:17.384 00D2BC98C77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:17.384 00D2BC98C77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:17.384 Cannot find device "nvmf_tgt_br" 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:17.384 Cannot find device "nvmf_tgt_br2" 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:17.384 Cannot find device "nvmf_tgt_br" 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:17.384 Cannot find device "nvmf_tgt_br2" 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:18:17.384 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:17.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:17.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:17.643 03:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:17.643 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:17.643 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:17.643 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:17.643 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:17.643 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:17.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:18:17.644 00:18:17.644 --- 10.0.0.2 ping statistics --- 00:18:17.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.644 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:17.644 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:17.644 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:18:17.644 00:18:17.644 --- 10.0.0.3 ping statistics --- 00:18:17.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.644 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:17.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:17.644 00:18:17.644 --- 10.0.0.1 ping statistics --- 00:18:17.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.644 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=77257 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 77257 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 77257 ']' 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.644 03:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.903 03:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:17.903 [2024-07-13 03:07:24.266157] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:17.903 [2024-07-13 03:07:24.266337] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.161 [2024-07-13 03:07:24.432235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.504 [2024-07-13 03:07:24.660231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.504 [2024-07-13 03:07:24.660341] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.504 [2024-07-13 03:07:24.660363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.504 [2024-07-13 03:07:24.660401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.504 [2024-07-13 03:07:24.660415] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.504 [2024-07-13 03:07:24.660460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.504 [2024-07-13 03:07:24.872910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:18.762 03:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.020 [2024-07-13 03:07:25.378715] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.020 [2024-07-13 03:07:25.394617] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:19.020 [2024-07-13 03:07:25.394873] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.020 [2024-07-13 03:07:25.443057] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:19.020 malloc0 00:18:19.020 03:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
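setup_nvmf_tgt_conf above is what turns the freshly started target into a TLS listener: the TLS PSK is written to key.txt with mode 0600 and a subsystem backed by malloc0 is exposed on 10.0.0.2:4420. A rough sketch of the RPC sequence it issues over /var/tmp/spdk.sock (flags abbreviated; the exact arguments live in test/nvmf/fips/fips.sh, and handing the key file to nvmf_subsystem_add_host is the deprecated "PSK path" feature the tcp.c warning above refers to):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  # TCP transport plus one subsystem backed by the malloc0 bdev created above
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  # listener on the veth address; the test additionally enables TLS on it inside fips.sh
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # allow host1 and point it at the PSK file (deprecated PSK-path form)
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"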
00:18:19.020 03:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=77291 00:18:19.020 03:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 77291 /var/tmp/bdevperf.sock 00:18:19.020 03:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.020 03:07:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 77291 ']' 00:18:19.020 03:07:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.020 03:07:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.020 03:07:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.020 03:07:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.020 03:07:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:19.278 [2024-07-13 03:07:25.592496] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:19.278 [2024-07-13 03:07:25.592665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77291 ] 00:18:19.278 [2024-07-13 03:07:25.754787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.536 [2024-07-13 03:07:25.948833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.794 [2024-07-13 03:07:26.140686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:20.053 03:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.053 03:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:20.053 03:07:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:20.311 [2024-07-13 03:07:26.774503] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.311 [2024-07-13 03:07:26.774711] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:20.569 TLSTESTn1 00:18:20.569 03:07:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:20.569 Running I/O for 10 seconds... 
00:18:30.546 00:18:30.546 Latency(us) 00:18:30.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.546 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:30.546 Verification LBA range: start 0x0 length 0x2000 00:18:30.546 TLSTESTn1 : 10.03 2898.34 11.32 0.00 0.00 44071.58 12571.00 30027.40 00:18:30.546 =================================================================================================================== 00:18:30.546 Total : 2898.34 11.32 0.00 0.00 44071.58 12571.00 30027.40 00:18:30.546 0 00:18:30.546 03:07:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:30.546 03:07:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:30.546 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:30.546 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:30.546 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:30.546 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:30.546 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:30.546 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:30.546 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:30.546 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:30.805 nvmf_trace.0 00:18:30.805 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:30.805 03:07:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 77291 00:18:30.805 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 77291 ']' 00:18:30.805 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 77291 00:18:30.805 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:30.805 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.805 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77291 00:18:30.805 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:30.805 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:30.805 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77291' 00:18:30.805 killing process with pid 77291 00:18:30.805 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 77291 00:18:30.805 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.805 00:18:30.805 Latency(us) 00:18:30.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.805 =================================================================================================================== 00:18:30.805 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.805 [2024-07-13 03:07:37.154170] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:30.805 03:07:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 77291 00:18:31.738 03:07:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:31.738 03:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
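The cleanup above tars up /dev/shm/nvmf_trace.0, the tracepoint buffer the target announced when it started with -i 0 -e 0xFFFF. To look at those events while the target is still running, rather than only archiving the file, the target's own startup notice earlier in this log suggests the following (the spdk_trace path is assumed to be the standard build output of this checkout):

  # snapshot the nvmf target's tracepoints at runtime, as suggested by its startup notice
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory file for offline analysis, which is what the tar step above does
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0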
00:18:31.738 03:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:31.996 03:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:31.996 03:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:31.996 03:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:31.996 03:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:31.996 rmmod nvme_tcp 00:18:31.996 rmmod nvme_fabrics 00:18:31.996 rmmod nvme_keyring 00:18:31.996 03:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:31.996 03:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:31.996 03:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:31.996 03:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 77257 ']' 00:18:31.996 03:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 77257 00:18:31.996 03:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 77257 ']' 00:18:31.997 03:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 77257 00:18:31.997 03:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:31.997 03:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.997 03:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77257 00:18:31.997 03:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:31.997 killing process with pid 77257 00:18:31.997 03:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:31.997 03:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77257' 00:18:31.997 03:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 77257 00:18:31.997 [2024-07-13 03:07:38.324964] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:31.997 03:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 77257 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:33.372 00:18:33.372 real 0m16.202s 00:18:33.372 user 0m22.923s 00:18:33.372 sys 0m5.422s 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:33.372 ************************************ 00:18:33.372 END TEST nvmf_fips 00:18:33.372 ************************************ 00:18:33.372 03:07:39 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:33.372 03:07:39 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:18:33.372 03:07:39 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:33.372 03:07:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:33.372 03:07:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:33.372 03:07:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:33.372 ************************************ 00:18:33.372 START TEST nvmf_fuzz 00:18:33.372 ************************************ 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:33.372 * Looking for test storage... 00:18:33.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.372 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.373 03:07:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:33.632 03:07:39 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:33.632 Cannot find device "nvmf_tgt_br" 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:33.632 Cannot find device "nvmf_tgt_br2" 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:33.632 Cannot find device "nvmf_tgt_br" 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:33.632 Cannot find device "nvmf_tgt_br2" 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:33.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:18:33.632 03:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:33.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:33.632 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:33.890 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:33.890 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:33.890 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:33.890 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:33.890 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:33.890 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:33.890 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:33.890 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:33.890 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:33.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:33.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:18:33.890 00:18:33.890 --- 10.0.0.2 ping statistics --- 00:18:33.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.890 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:18:33.890 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:33.890 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:33.890 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:18:33.890 00:18:33.890 --- 10.0.0.3 ping statistics --- 00:18:33.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.890 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:33.890 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:33.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:33.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:33.891 00:18:33.891 --- 10.0.0.1 ping statistics --- 00:18:33.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.891 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77628 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77628 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 77628 ']' 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
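The nvmf_veth_init sequence traced above builds a small test topology: one veth pair for the initiator, two veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, and a bridge tying the host-side ends together, plus an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed sketch of that setup, using only the interface names and addresses visible in the trace (run as root; not the full common.sh helper, which also first tears down any stale interfaces), would be:

# Sketch of the topology created by nvmf_veth_init (iproute2 + iptables assumed available).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target port
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # tie host-side ends together
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # reachability checks, as at common.sh@205-207

The three pings mirror the checks performed before nvmf_tgt is launched inside the namespace.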
00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.891 03:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:34.826 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.826 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:18:34.826 03:07:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:34.826 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.826 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:34.826 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.826 03:07:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:34.826 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.826 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:35.084 Malloc0 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:35.084 03:07:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:36.019 Shutting down the fuzz application 00:18:36.019 03:07:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:36.951 Shutting down the fuzz application 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.951 rmmod nvme_tcp 00:18:36.951 rmmod nvme_fabrics 00:18:36.951 rmmod nvme_keyring 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 77628 ']' 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 77628 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 77628 ']' 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 77628 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77628 00:18:36.951 killing process with pid 77628 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77628' 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 77628 00:18:36.951 03:07:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 77628 00:18:38.326 03:07:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:38.326 03:07:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:38.326 03:07:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:38.326 03:07:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:38.326 03:07:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:38.326 03:07:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.326 03:07:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.326 03:07:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.326 03:07:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:38.326 03:07:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:38.326 00:18:38.326 real 0m4.983s 00:18:38.326 user 0m6.013s 00:18:38.326 sys 0m0.844s 00:18:38.326 
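The fuzz stage itself is two invocations of nvme_fuzz against the single cnode1 listener, followed by teardown. Condensed from the trace (paths and flags exactly as captured; the last four lines are an assumed rough equivalent of what nvmfcleanup/_remove_spdk_ns do on this virt/tcp configuration, not their literal contents):

FUZZ=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
"$FUZZ" -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a    # 30 s randomized pass with a fixed seed
"$FUZZ" -m 0x2 -F "$TRID" -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a   # replay the canned JSON command set

modprobe -v -r nvme-tcp          # verbose output above shows this also drops nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                  # stop nvmf_tgt (pid 77628 in this run)
ip netns delete nvmf_tgt_ns_spdk # assumed stand-in for _remove_spdk_ns
ip -4 addr flush nvmf_init_if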
************************************ 00:18:38.326 END TEST nvmf_fuzz 00:18:38.326 ************************************ 00:18:38.326 03:07:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:38.326 03:07:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:38.326 03:07:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:38.326 03:07:44 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:38.326 03:07:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:38.326 03:07:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.326 03:07:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:38.326 ************************************ 00:18:38.326 START TEST nvmf_multiconnection 00:18:38.326 ************************************ 00:18:38.326 03:07:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:38.586 * Looking for test storage... 00:18:38.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:38.586 Cannot find device "nvmf_tgt_br" 00:18:38.586 03:07:44 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:38.586 Cannot find device "nvmf_tgt_br2" 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:38.586 Cannot find device "nvmf_tgt_br" 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:38.586 Cannot find device "nvmf_tgt_br2" 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:18:38.586 03:07:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:38.586 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:38.586 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:38.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:38.586 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:18:38.586 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:38.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:38.586 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:18:38.586 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:38.586 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:38.586 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:38.586 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:38.586 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:38.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:38.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:18:38.846 00:18:38.846 --- 10.0.0.2 ping statistics --- 00:18:38.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.846 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:38.846 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:38.846 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:18:38.846 00:18:38.846 --- 10.0.0.3 ping statistics --- 00:18:38.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.846 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:38.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:38.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:18:38.846 00:18:38.846 --- 10.0.0.1 ping statistics --- 00:18:38.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.846 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=77870 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 77870 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 77870 ']' 00:18:38.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.846 03:07:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:39.105 [2024-07-13 03:07:45.384206] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:39.105 [2024-07-13 03:07:45.384390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.105 [2024-07-13 03:07:45.565416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.363 [2024-07-13 03:07:45.814069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:39.363 [2024-07-13 03:07:45.814557] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.363 [2024-07-13 03:07:45.814729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.363 [2024-07-13 03:07:45.814851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.363 [2024-07-13 03:07:45.815016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.363 [2024-07-13 03:07:45.815288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.363 [2024-07-13 03:07:45.815502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.363 [2024-07-13 03:07:45.816241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.363 [2024-07-13 03:07:45.816243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.622 [2024-07-13 03:07:46.010927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:39.880 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.880 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:18:39.880 03:07:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.880 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:39.880 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.139 [2024-07-13 03:07:46.389351] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.139 Malloc1 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.139 [2024-07-13 03:07:46.508231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.139 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.140 Malloc2 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.140 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.398 Malloc3 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
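The loop traced above (and continuing below) repeats the same four RPCs for each of the 11 subsystems, with only the index substituted. Expressed directly against SPDK's rpc.py, which rpc_cmd wraps in this harness (default /var/tmp/spdk.sock RPC socket assumed), the whole loop is roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192                  # done once, flags as in the trace
for i in $(seq 1 11); do                                        # NVMF_SUBSYS=11
  "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"                # 64 MB backing bdev, 512 B blocks
  "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done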
00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.398 Malloc4 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.398 03:07:46 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.398 Malloc5 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.398 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.399 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.399 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:40.399 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.399 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.399 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.399 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.399 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:40.399 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.399 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.658 Malloc6 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:40.658 03:07:46 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.658 03:07:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.658 Malloc7 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.658 Malloc8 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:40.658 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.917 Malloc9 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:40.917 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.918 Malloc10 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.918 03:07:47 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.918 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:41.176 Malloc11 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.176 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:18:41.177 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.177 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 
-t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.177 03:07:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:41.177 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:41.177 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:41.177 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:41.177 03:07:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:43.709 03:07:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:45.610 03:07:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:45.610 03:07:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:45.610 03:07:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:18:45.610 03:07:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:45.610 03:07:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.610 03:07:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:45.610 03:07:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.610 03:07:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:45.611 03:07:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:45.611 03:07:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:45.611 03:07:51 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:45.611 03:07:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:45.611 03:07:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:47.542 03:07:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:47.542 03:07:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:47.542 03:07:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:18:47.542 03:07:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:47.542 03:07:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.542 03:07:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:47.542 03:07:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.542 03:07:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:47.818 03:07:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:47.818 03:07:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:47.818 03:07:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.818 03:07:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:47.818 03:07:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:49.728 03:07:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:49.728 03:07:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:49.728 03:07:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:18:49.728 03:07:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:49.729 03:07:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.729 03:07:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:49.729 03:07:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.729 03:07:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:49.729 03:07:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:49.729 03:07:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:49.729 03:07:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.729 03:07:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:49.729 03:07:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:52.259 03:07:58 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:52.260 03:07:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:52.260 03:07:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:18:52.260 03:07:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:52.260 03:07:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.260 03:07:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:52.260 03:07:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.260 03:07:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:52.260 03:07:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:52.260 03:07:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:52.260 03:07:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:52.260 03:07:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:52.260 03:07:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:54.162 03:08:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:56.067 03:08:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:56.067 03:08:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:56.067 03:08:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:18:56.067 
03:08:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:56.067 03:08:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.067 03:08:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:56.067 03:08:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:56.067 03:08:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:56.326 03:08:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:56.326 03:08:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:56.326 03:08:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.326 03:08:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:56.326 03:08:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:58.225 03:08:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:58.225 03:08:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:58.226 03:08:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:18:58.226 03:08:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:58.226 03:08:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.226 03:08:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:58.226 03:08:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.226 03:08:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:58.483 03:08:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:58.483 03:08:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:58.483 03:08:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.483 03:08:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:58.483 03:08:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:00.386 03:08:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:00.386 03:08:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:00.386 03:08:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:19:00.386 03:08:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:00.386 03:08:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.386 03:08:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # 
return 0 00:19:00.386 03:08:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.386 03:08:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:00.644 03:08:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:00.644 03:08:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:00.644 03:08:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.644 03:08:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:00.644 03:08:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:02.544 03:08:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:02.544 03:08:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:02.544 03:08:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:19:02.802 03:08:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:02.802 03:08:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:02.802 03:08:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:02.802 03:08:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:02.802 03:08:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:02.802 03:08:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:02.802 03:08:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:02.802 03:08:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.803 03:08:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:02.803 03:08:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:05.332 03:08:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:05.332 03:08:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:05.332 03:08:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:19:05.332 03:08:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:05.332 03:08:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.332 03:08:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:05.332 03:08:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:05.332 [global] 00:19:05.332 thread=1 00:19:05.332 invalidate=1 00:19:05.332 rw=read 00:19:05.332 time_based=1 00:19:05.332 
runtime=10 00:19:05.332 ioengine=libaio 00:19:05.332 direct=1 00:19:05.332 bs=262144 00:19:05.332 iodepth=64 00:19:05.332 norandommap=1 00:19:05.332 numjobs=1 00:19:05.332 00:19:05.332 [job0] 00:19:05.332 filename=/dev/nvme0n1 00:19:05.332 [job1] 00:19:05.332 filename=/dev/nvme10n1 00:19:05.332 [job2] 00:19:05.332 filename=/dev/nvme1n1 00:19:05.332 [job3] 00:19:05.332 filename=/dev/nvme2n1 00:19:05.332 [job4] 00:19:05.332 filename=/dev/nvme3n1 00:19:05.332 [job5] 00:19:05.332 filename=/dev/nvme4n1 00:19:05.332 [job6] 00:19:05.332 filename=/dev/nvme5n1 00:19:05.332 [job7] 00:19:05.332 filename=/dev/nvme6n1 00:19:05.332 [job8] 00:19:05.332 filename=/dev/nvme7n1 00:19:05.332 [job9] 00:19:05.332 filename=/dev/nvme8n1 00:19:05.332 [job10] 00:19:05.332 filename=/dev/nvme9n1 00:19:05.332 Could not set queue depth (nvme0n1) 00:19:05.332 Could not set queue depth (nvme10n1) 00:19:05.332 Could not set queue depth (nvme1n1) 00:19:05.332 Could not set queue depth (nvme2n1) 00:19:05.332 Could not set queue depth (nvme3n1) 00:19:05.332 Could not set queue depth (nvme4n1) 00:19:05.333 Could not set queue depth (nvme5n1) 00:19:05.333 Could not set queue depth (nvme6n1) 00:19:05.333 Could not set queue depth (nvme7n1) 00:19:05.333 Could not set queue depth (nvme8n1) 00:19:05.333 Could not set queue depth (nvme9n1) 00:19:05.333 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.333 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.333 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.333 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.333 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.333 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.333 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.333 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.333 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.333 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.333 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.333 fio-3.35 00:19:05.333 Starting 11 threads 00:19:17.535 00:19:17.535 job0: (groupid=0, jobs=1): err= 0: pid=78330: Sat Jul 13 03:08:21 2024 00:19:17.535 read: IOPS=482, BW=121MiB/s (126MB/s)(1218MiB/10104msec) 00:19:17.535 slat (usec): min=22, max=48688, avg=2046.20, stdev=4443.98 00:19:17.535 clat (msec): min=56, max=230, avg=130.54, stdev=13.94 00:19:17.535 lat (msec): min=57, max=238, avg=132.59, stdev=14.19 00:19:17.535 clat percentiles (msec): 00:19:17.535 | 1.00th=[ 83], 5.00th=[ 107], 10.00th=[ 120], 20.00th=[ 126], 00:19:17.535 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 132], 60.00th=[ 134], 00:19:17.535 | 70.00th=[ 136], 80.00th=[ 138], 90.00th=[ 142], 95.00th=[ 146], 00:19:17.535 | 99.00th=[ 159], 99.50th=[ 186], 99.90th=[ 226], 99.95th=[ 230], 00:19:17.535 | 99.99th=[ 232] 00:19:17.535 bw ( KiB/s): min=114459, max=143360, per=8.48%, 
avg=123074.80, stdev=6482.15, samples=20 00:19:17.535 iops : min= 447, max= 560, avg=480.75, stdev=25.33, samples=20 00:19:17.535 lat (msec) : 100=3.49%, 250=96.51% 00:19:17.535 cpu : usr=0.41%, sys=2.31%, ctx=1160, majf=0, minf=4097 00:19:17.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:17.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.535 issued rwts: total=4872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.535 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.535 job1: (groupid=0, jobs=1): err= 0: pid=78331: Sat Jul 13 03:08:21 2024 00:19:17.535 read: IOPS=481, BW=120MiB/s (126MB/s)(1216MiB/10099msec) 00:19:17.535 slat (usec): min=22, max=33058, avg=2051.87, stdev=4349.73 00:19:17.535 clat (msec): min=39, max=229, avg=130.72, stdev=15.88 00:19:17.535 lat (msec): min=40, max=237, avg=132.77, stdev=16.18 00:19:17.535 clat percentiles (msec): 00:19:17.535 | 1.00th=[ 61], 5.00th=[ 103], 10.00th=[ 122], 20.00th=[ 127], 00:19:17.535 | 30.00th=[ 130], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 134], 00:19:17.535 | 70.00th=[ 136], 80.00th=[ 138], 90.00th=[ 142], 95.00th=[ 146], 00:19:17.535 | 99.00th=[ 163], 99.50th=[ 192], 99.90th=[ 230], 99.95th=[ 230], 00:19:17.535 | 99.99th=[ 230] 00:19:17.535 bw ( KiB/s): min=113664, max=154624, per=8.46%, avg=122842.90, stdev=9196.95, samples=20 00:19:17.535 iops : min= 444, max= 604, avg=479.85, stdev=35.93, samples=20 00:19:17.535 lat (msec) : 50=0.74%, 100=3.50%, 250=95.76% 00:19:17.535 cpu : usr=0.31%, sys=1.95%, ctx=1167, majf=0, minf=4097 00:19:17.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:17.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.535 issued rwts: total=4862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.535 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.535 job2: (groupid=0, jobs=1): err= 0: pid=78332: Sat Jul 13 03:08:21 2024 00:19:17.535 read: IOPS=586, BW=147MiB/s (154MB/s)(1470MiB/10029msec) 00:19:17.535 slat (usec): min=18, max=103079, avg=1696.84, stdev=4010.84 00:19:17.535 clat (msec): min=20, max=197, avg=107.40, stdev=15.49 00:19:17.535 lat (msec): min=30, max=219, avg=109.10, stdev=15.49 00:19:17.535 clat percentiles (msec): 00:19:17.535 | 1.00th=[ 84], 5.00th=[ 91], 10.00th=[ 95], 20.00th=[ 100], 00:19:17.535 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 108], 00:19:17.535 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 136], 00:19:17.535 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 197], 99.95th=[ 199], 00:19:17.535 | 99.99th=[ 199] 00:19:17.535 bw ( KiB/s): min=80032, max=160256, per=10.26%, avg=148857.25, stdev=18496.68, samples=20 00:19:17.535 iops : min= 312, max= 626, avg=581.40, stdev=72.38, samples=20 00:19:17.535 lat (msec) : 50=0.29%, 100=24.92%, 250=74.79% 00:19:17.535 cpu : usr=0.30%, sys=2.46%, ctx=1392, majf=0, minf=4097 00:19:17.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:17.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.535 issued rwts: total=5878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.535 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.535 job3: 
(groupid=0, jobs=1): err= 0: pid=78333: Sat Jul 13 03:08:21 2024 00:19:17.535 read: IOPS=473, BW=118MiB/s (124MB/s)(1196MiB/10104msec) 00:19:17.535 slat (usec): min=20, max=78471, avg=2060.82, stdev=4626.25 00:19:17.535 clat (msec): min=80, max=230, avg=132.98, stdev=11.55 00:19:17.535 lat (msec): min=83, max=235, avg=135.04, stdev=11.86 00:19:17.535 clat percentiles (msec): 00:19:17.535 | 1.00th=[ 89], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 128], 00:19:17.535 | 30.00th=[ 130], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 136], 00:19:17.535 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 148], 00:19:17.535 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 218], 99.95th=[ 232], 00:19:17.535 | 99.99th=[ 232] 00:19:17.535 bw ( KiB/s): min=113152, max=126464, per=8.33%, avg=120821.70, stdev=3672.52, samples=20 00:19:17.535 iops : min= 442, max= 494, avg=471.95, stdev=14.35, samples=20 00:19:17.535 lat (msec) : 100=2.07%, 250=97.93% 00:19:17.535 cpu : usr=0.32%, sys=2.18%, ctx=1177, majf=0, minf=4097 00:19:17.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:17.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.535 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.535 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.535 job4: (groupid=0, jobs=1): err= 0: pid=78334: Sat Jul 13 03:08:21 2024 00:19:17.535 read: IOPS=479, BW=120MiB/s (126MB/s)(1210MiB/10097msec) 00:19:17.535 slat (usec): min=19, max=51402, avg=2060.72, stdev=4401.64 00:19:17.535 clat (msec): min=62, max=223, avg=131.30, stdev=12.25 00:19:17.535 lat (msec): min=64, max=223, avg=133.36, stdev=12.52 00:19:17.535 clat percentiles (msec): 00:19:17.535 | 1.00th=[ 90], 5.00th=[ 106], 10.00th=[ 121], 20.00th=[ 128], 00:19:17.535 | 30.00th=[ 130], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 134], 00:19:17.535 | 70.00th=[ 136], 80.00th=[ 138], 90.00th=[ 142], 95.00th=[ 146], 00:19:17.535 | 99.00th=[ 153], 99.50th=[ 182], 99.90th=[ 213], 99.95th=[ 213], 00:19:17.535 | 99.99th=[ 224] 00:19:17.535 bw ( KiB/s): min=115200, max=139264, per=8.43%, avg=122318.30, stdev=5519.92, samples=20 00:19:17.535 iops : min= 450, max= 544, avg=477.80, stdev=21.56, samples=20 00:19:17.535 lat (msec) : 100=3.47%, 250=96.53% 00:19:17.535 cpu : usr=0.30%, sys=2.24%, ctx=1172, majf=0, minf=4097 00:19:17.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:17.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.535 issued rwts: total=4841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.535 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.535 job5: (groupid=0, jobs=1): err= 0: pid=78335: Sat Jul 13 03:08:21 2024 00:19:17.535 read: IOPS=485, BW=121MiB/s (127MB/s)(1227MiB/10102msec) 00:19:17.535 slat (usec): min=22, max=39998, avg=2034.02, stdev=4370.18 00:19:17.535 clat (msec): min=31, max=222, avg=129.52, stdev=15.14 00:19:17.535 lat (msec): min=31, max=222, avg=131.55, stdev=15.41 00:19:17.535 clat percentiles (msec): 00:19:17.535 | 1.00th=[ 64], 5.00th=[ 103], 10.00th=[ 120], 20.00th=[ 126], 00:19:17.535 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 132], 60.00th=[ 133], 00:19:17.535 | 70.00th=[ 136], 80.00th=[ 138], 90.00th=[ 142], 95.00th=[ 146], 00:19:17.535 | 99.00th=[ 157], 99.50th=[ 174], 99.90th=[ 215], 
99.95th=[ 222], 00:19:17.535 | 99.99th=[ 222] 00:19:17.535 bw ( KiB/s): min=115200, max=155446, per=8.54%, avg=124010.30, stdev=8560.29, samples=20 00:19:17.535 iops : min= 450, max= 607, avg=484.40, stdev=33.40, samples=20 00:19:17.535 lat (msec) : 50=0.73%, 100=3.75%, 250=95.52% 00:19:17.535 cpu : usr=0.22%, sys=2.26%, ctx=1150, majf=0, minf=4097 00:19:17.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:17.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.535 issued rwts: total=4908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.535 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.535 job6: (groupid=0, jobs=1): err= 0: pid=78336: Sat Jul 13 03:08:21 2024 00:19:17.535 read: IOPS=481, BW=120MiB/s (126MB/s)(1216MiB/10103msec) 00:19:17.535 slat (usec): min=22, max=54645, avg=2051.42, stdev=4422.68 00:19:17.535 clat (msec): min=69, max=226, avg=130.80, stdev=12.76 00:19:17.535 lat (msec): min=69, max=239, avg=132.85, stdev=13.01 00:19:17.535 clat percentiles (msec): 00:19:17.535 | 1.00th=[ 92], 5.00th=[ 107], 10.00th=[ 118], 20.00th=[ 126], 00:19:17.535 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 132], 60.00th=[ 134], 00:19:17.536 | 70.00th=[ 136], 80.00th=[ 138], 90.00th=[ 142], 95.00th=[ 146], 00:19:17.536 | 99.00th=[ 159], 99.50th=[ 197], 99.90th=[ 226], 99.95th=[ 226], 00:19:17.536 | 99.99th=[ 226] 00:19:17.536 bw ( KiB/s): min=114688, max=146944, per=8.46%, avg=122818.50, stdev=7036.51, samples=20 00:19:17.536 iops : min= 448, max= 574, avg=479.75, stdev=27.49, samples=20 00:19:17.536 lat (msec) : 100=2.80%, 250=97.20% 00:19:17.536 cpu : usr=0.23%, sys=2.10%, ctx=1160, majf=0, minf=4097 00:19:17.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:17.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.536 issued rwts: total=4862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.536 job7: (groupid=0, jobs=1): err= 0: pid=78337: Sat Jul 13 03:08:21 2024 00:19:17.536 read: IOPS=476, BW=119MiB/s (125MB/s)(1205MiB/10107msec) 00:19:17.536 slat (usec): min=22, max=52216, avg=2069.14, stdev=4575.35 00:19:17.536 clat (msec): min=48, max=240, avg=132.05, stdev=14.11 00:19:17.536 lat (msec): min=48, max=240, avg=134.12, stdev=14.38 00:19:17.536 clat percentiles (msec): 00:19:17.536 | 1.00th=[ 83], 5.00th=[ 110], 10.00th=[ 122], 20.00th=[ 128], 00:19:17.536 | 30.00th=[ 130], 40.00th=[ 132], 50.00th=[ 134], 60.00th=[ 136], 00:19:17.536 | 70.00th=[ 138], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 146], 00:19:17.536 | 99.00th=[ 159], 99.50th=[ 186], 99.90th=[ 230], 99.95th=[ 236], 00:19:17.536 | 99.99th=[ 241] 00:19:17.536 bw ( KiB/s): min=113152, max=140288, per=8.39%, avg=121693.15, stdev=7253.55, samples=20 00:19:17.536 iops : min= 442, max= 548, avg=475.35, stdev=28.35, samples=20 00:19:17.536 lat (msec) : 50=0.35%, 100=2.37%, 250=97.28% 00:19:17.536 cpu : usr=0.30%, sys=2.38%, ctx=1118, majf=0, minf=4097 00:19:17.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:17.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.536 issued rwts: total=4818,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:17.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.536 job8: (groupid=0, jobs=1): err= 0: pid=78338: Sat Jul 13 03:08:21 2024 00:19:17.536 read: IOPS=555, BW=139MiB/s (146MB/s)(1402MiB/10099msec) 00:19:17.536 slat (usec): min=17, max=80054, avg=1777.72, stdev=4117.10 00:19:17.536 clat (msec): min=10, max=234, avg=113.40, stdev=40.99 00:19:17.536 lat (msec): min=11, max=234, avg=115.18, stdev=41.62 00:19:17.536 clat percentiles (msec): 00:19:17.536 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 43], 00:19:17.536 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 133], 00:19:17.536 | 70.00th=[ 136], 80.00th=[ 138], 90.00th=[ 142], 95.00th=[ 148], 00:19:17.536 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 203], 99.95th=[ 230], 00:19:17.536 | 99.99th=[ 236] 00:19:17.536 bw ( KiB/s): min=113664, max=398848, per=9.78%, avg=141889.00, stdev=67884.67, samples=20 00:19:17.536 iops : min= 444, max= 1558, avg=554.25, stdev=265.18, samples=20 00:19:17.536 lat (msec) : 20=0.86%, 50=19.55%, 100=1.32%, 250=78.27% 00:19:17.536 cpu : usr=0.35%, sys=2.05%, ctx=1317, majf=0, minf=4097 00:19:17.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:17.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.536 issued rwts: total=5606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.536 job9: (groupid=0, jobs=1): err= 0: pid=78339: Sat Jul 13 03:08:21 2024 00:19:17.536 read: IOPS=589, BW=147MiB/s (155MB/s)(1479MiB/10033msec) 00:19:17.536 slat (usec): min=18, max=58354, avg=1686.63, stdev=3738.15 00:19:17.536 clat (msec): min=23, max=173, avg=106.79, stdev=12.59 00:19:17.536 lat (msec): min=34, max=173, avg=108.48, stdev=12.59 00:19:17.536 clat percentiles (msec): 00:19:17.536 | 1.00th=[ 85], 5.00th=[ 92], 10.00th=[ 96], 20.00th=[ 100], 00:19:17.536 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 108], 00:19:17.536 | 70.00th=[ 110], 80.00th=[ 112], 90.00th=[ 118], 95.00th=[ 133], 00:19:17.536 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 169], 99.95th=[ 174], 00:19:17.536 | 99.99th=[ 174] 00:19:17.536 bw ( KiB/s): min=105261, max=160768, per=10.32%, avg=149729.80, stdev=13942.36, samples=20 00:19:17.536 iops : min= 411, max= 628, avg=584.75, stdev=54.49, samples=20 00:19:17.536 lat (msec) : 50=0.17%, 100=23.71%, 250=76.12% 00:19:17.536 cpu : usr=0.38%, sys=2.70%, ctx=1337, majf=0, minf=4097 00:19:17.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:17.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.536 issued rwts: total=5914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.536 job10: (groupid=0, jobs=1): err= 0: pid=78340: Sat Jul 13 03:08:21 2024 00:19:17.536 read: IOPS=593, BW=148MiB/s (156MB/s)(1488MiB/10031msec) 00:19:17.536 slat (usec): min=18, max=47154, avg=1674.93, stdev=3746.43 00:19:17.536 clat (msec): min=29, max=154, avg=106.04, stdev=11.90 00:19:17.536 lat (msec): min=32, max=167, avg=107.72, stdev=12.00 00:19:17.536 clat percentiles (msec): 00:19:17.536 | 1.00th=[ 80], 5.00th=[ 91], 10.00th=[ 95], 20.00th=[ 99], 00:19:17.536 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 108], 
00:19:17.536 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 117], 95.00th=[ 129], 00:19:17.536 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 155], 00:19:17.536 | 99.99th=[ 155] 00:19:17.536 bw ( KiB/s): min=118784, max=162304, per=10.39%, avg=150738.80, stdev=11261.22, samples=20 00:19:17.536 iops : min= 464, max= 634, avg=588.70, stdev=44.00, samples=20 00:19:17.536 lat (msec) : 50=0.47%, 100=24.98%, 250=74.55% 00:19:17.536 cpu : usr=0.24%, sys=2.42%, ctx=1360, majf=0, minf=4097 00:19:17.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:17.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.536 issued rwts: total=5953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.536 00:19:17.536 Run status group 0 (all jobs): 00:19:17.536 READ: bw=1417MiB/s (1486MB/s), 118MiB/s-148MiB/s (124MB/s-156MB/s), io=14.0GiB (15.0GB), run=10029-10107msec 00:19:17.536 00:19:17.536 Disk stats (read/write): 00:19:17.536 nvme0n1: ios=9640/0, merge=0/0, ticks=1231642/0, in_queue=1231642, util=98.03% 00:19:17.536 nvme10n1: ios=9618/0, merge=0/0, ticks=1230019/0, in_queue=1230019, util=98.15% 00:19:17.536 nvme1n1: ios=11692/0, merge=0/0, ticks=1240647/0, in_queue=1240647, util=98.29% 00:19:17.536 nvme2n1: ios=9462/0, merge=0/0, ticks=1231878/0, in_queue=1231878, util=98.44% 00:19:17.536 nvme3n1: ios=9567/0, merge=0/0, ticks=1230045/0, in_queue=1230045, util=98.37% 00:19:17.536 nvme4n1: ios=9705/0, merge=0/0, ticks=1231455/0, in_queue=1231455, util=98.56% 00:19:17.536 nvme5n1: ios=9618/0, merge=0/0, ticks=1230675/0, in_queue=1230675, util=98.75% 00:19:17.536 nvme6n1: ios=9536/0, merge=0/0, ticks=1232904/0, in_queue=1232904, util=98.82% 00:19:17.536 nvme7n1: ios=11095/0, merge=0/0, ticks=1230938/0, in_queue=1230938, util=98.93% 00:19:17.536 nvme8n1: ios=11438/0, merge=0/0, ticks=1208357/0, in_queue=1208357, util=99.08% 00:19:17.536 nvme9n1: ios=11518/0, merge=0/0, ticks=1208177/0, in_queue=1208177, util=99.13% 00:19:17.536 03:08:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:17.536 [global] 00:19:17.536 thread=1 00:19:17.536 invalidate=1 00:19:17.536 rw=randwrite 00:19:17.536 time_based=1 00:19:17.536 runtime=10 00:19:17.536 ioengine=libaio 00:19:17.536 direct=1 00:19:17.536 bs=262144 00:19:17.536 iodepth=64 00:19:17.536 norandommap=1 00:19:17.536 numjobs=1 00:19:17.536 00:19:17.536 [job0] 00:19:17.536 filename=/dev/nvme0n1 00:19:17.536 [job1] 00:19:17.536 filename=/dev/nvme10n1 00:19:17.536 [job2] 00:19:17.536 filename=/dev/nvme1n1 00:19:17.536 [job3] 00:19:17.536 filename=/dev/nvme2n1 00:19:17.536 [job4] 00:19:17.536 filename=/dev/nvme3n1 00:19:17.536 [job5] 00:19:17.536 filename=/dev/nvme4n1 00:19:17.536 [job6] 00:19:17.536 filename=/dev/nvme5n1 00:19:17.536 [job7] 00:19:17.536 filename=/dev/nvme6n1 00:19:17.536 [job8] 00:19:17.536 filename=/dev/nvme7n1 00:19:17.536 [job9] 00:19:17.536 filename=/dev/nvme8n1 00:19:17.536 [job10] 00:19:17.536 filename=/dev/nvme9n1 00:19:17.536 Could not set queue depth (nvme0n1) 00:19:17.536 Could not set queue depth (nvme10n1) 00:19:17.536 Could not set queue depth (nvme1n1) 00:19:17.536 Could not set queue depth (nvme2n1) 00:19:17.536 Could not set queue depth (nvme3n1) 00:19:17.536 Could not set queue depth (nvme4n1) 00:19:17.536 Could 
not set queue depth (nvme5n1) 00:19:17.536 Could not set queue depth (nvme6n1) 00:19:17.536 Could not set queue depth (nvme7n1) 00:19:17.536 Could not set queue depth (nvme8n1) 00:19:17.536 Could not set queue depth (nvme9n1) 00:19:17.536 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.536 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.536 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.536 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.536 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.536 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.536 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.536 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.536 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.536 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.536 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.536 fio-3.35 00:19:17.536 Starting 11 threads 00:19:27.559 00:19:27.559 job0: (groupid=0, jobs=1): err= 0: pid=78535: Sat Jul 13 03:08:32 2024 00:19:27.559 write: IOPS=589, BW=147MiB/s (155MB/s)(1493MiB/10120msec); 0 zone resets 00:19:27.559 slat (usec): min=17, max=11046, avg=1669.51, stdev=2833.20 00:19:27.559 clat (msec): min=12, max=230, avg=106.77, stdev=10.94 00:19:27.559 lat (msec): min=12, max=230, avg=108.44, stdev=10.69 00:19:27.559 clat percentiles (msec): 00:19:27.559 | 1.00th=[ 95], 5.00th=[ 97], 10.00th=[ 102], 20.00th=[ 103], 00:19:27.559 | 30.00th=[ 104], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:19:27.559 | 70.00th=[ 111], 80.00th=[ 111], 90.00th=[ 112], 95.00th=[ 113], 00:19:27.559 | 99.00th=[ 123], 99.50th=[ 184], 99.90th=[ 224], 99.95th=[ 224], 00:19:27.559 | 99.99th=[ 232] 00:19:27.559 bw ( KiB/s): min=145408, max=160256, per=11.94%, avg=151219.20, stdev=5150.29, samples=20 00:19:27.559 iops : min= 568, max= 626, avg=590.70, stdev=20.12, samples=20 00:19:27.559 lat (msec) : 20=0.13%, 50=0.34%, 100=8.31%, 250=91.22% 00:19:27.559 cpu : usr=1.03%, sys=1.81%, ctx=6819, majf=0, minf=1 00:19:27.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:27.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.559 issued rwts: total=0,5970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.559 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.559 job1: (groupid=0, jobs=1): err= 0: pid=78537: Sat Jul 13 03:08:32 2024 00:19:27.560 write: IOPS=438, BW=110MiB/s (115MB/s)(1105MiB/10069msec); 0 zone resets 00:19:27.560 slat (usec): min=19, max=10826, avg=2193.47, stdev=3856.44 00:19:27.560 clat (msec): min=10, max=209, avg=143.58, stdev=17.10 00:19:27.560 lat (msec): min=10, max=209, avg=145.78, stdev=17.06 
00:19:27.560 clat percentiles (msec): 00:19:27.560 | 1.00th=[ 39], 5.00th=[ 132], 10.00th=[ 134], 20.00th=[ 142], 00:19:27.560 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 150], 00:19:27.560 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 155], 00:19:27.560 | 99.00th=[ 157], 99.50th=[ 182], 99.90th=[ 201], 99.95th=[ 205], 00:19:27.560 | 99.99th=[ 209] 00:19:27.560 bw ( KiB/s): min=106496, max=129024, per=8.80%, avg=111488.00, stdev=5423.27, samples=20 00:19:27.560 iops : min= 416, max= 504, avg=435.50, stdev=21.18, samples=20 00:19:27.560 lat (msec) : 20=0.20%, 50=1.04%, 100=1.49%, 250=97.26% 00:19:27.560 cpu : usr=0.91%, sys=1.34%, ctx=5160, majf=0, minf=1 00:19:27.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:27.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.560 issued rwts: total=0,4418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.560 job2: (groupid=0, jobs=1): err= 0: pid=78549: Sat Jul 13 03:08:32 2024 00:19:27.560 write: IOPS=334, BW=83.5MiB/s (87.6MB/s)(848MiB/10151msec); 0 zone resets 00:19:27.560 slat (usec): min=17, max=83317, avg=2942.25, stdev=5265.50 00:19:27.560 clat (msec): min=23, max=333, avg=188.51, stdev=21.91 00:19:27.560 lat (msec): min=23, max=333, avg=191.45, stdev=21.65 00:19:27.560 clat percentiles (msec): 00:19:27.560 | 1.00th=[ 65], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:19:27.560 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 194], 00:19:27.560 | 70.00th=[ 197], 80.00th=[ 197], 90.00th=[ 199], 95.00th=[ 201], 00:19:27.560 | 99.00th=[ 241], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 334], 00:19:27.560 | 99.99th=[ 334] 00:19:27.560 bw ( KiB/s): min=81920, max=90112, per=6.73%, avg=85222.40, stdev=2726.89, samples=20 00:19:27.560 iops : min= 320, max= 352, avg=332.90, stdev=10.65, samples=20 00:19:27.560 lat (msec) : 50=0.59%, 100=1.06%, 250=97.46%, 500=0.88% 00:19:27.560 cpu : usr=0.57%, sys=1.06%, ctx=4079, majf=0, minf=1 00:19:27.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:19:27.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.560 issued rwts: total=0,3392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.560 job3: (groupid=0, jobs=1): err= 0: pid=78550: Sat Jul 13 03:08:32 2024 00:19:27.560 write: IOPS=589, BW=147MiB/s (155MB/s)(1493MiB/10120msec); 0 zone resets 00:19:27.560 slat (usec): min=18, max=9244, avg=1670.03, stdev=2835.60 00:19:27.560 clat (msec): min=11, max=228, avg=106.77, stdev=10.80 00:19:27.560 lat (msec): min=11, max=228, avg=108.44, stdev=10.54 00:19:27.560 clat percentiles (msec): 00:19:27.560 | 1.00th=[ 95], 5.00th=[ 97], 10.00th=[ 102], 20.00th=[ 103], 00:19:27.560 | 30.00th=[ 104], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:19:27.560 | 70.00th=[ 111], 80.00th=[ 111], 90.00th=[ 112], 95.00th=[ 113], 00:19:27.560 | 99.00th=[ 124], 99.50th=[ 184], 99.90th=[ 224], 99.95th=[ 224], 00:19:27.560 | 99.99th=[ 230] 00:19:27.560 bw ( KiB/s): min=145699, max=159936, per=11.95%, avg=151311.35, stdev=5060.40, samples=20 00:19:27.560 iops : min= 569, max= 624, avg=590.95, stdev=19.73, samples=20 00:19:27.560 lat (msec) : 20=0.10%, 50=0.30%, 
100=8.41%, 250=91.19% 00:19:27.560 cpu : usr=1.08%, sys=1.54%, ctx=7831, majf=0, minf=1 00:19:27.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:27.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.560 issued rwts: total=0,5970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.560 job4: (groupid=0, jobs=1): err= 0: pid=78551: Sat Jul 13 03:08:32 2024 00:19:27.560 write: IOPS=336, BW=84.2MiB/s (88.3MB/s)(850MiB/10096msec); 0 zone resets 00:19:27.560 slat (usec): min=16, max=59872, avg=2913.91, stdev=5182.87 00:19:27.560 clat (msec): min=32, max=271, avg=187.08, stdev=18.48 00:19:27.560 lat (msec): min=35, max=271, avg=189.99, stdev=18.21 00:19:27.560 clat percentiles (msec): 00:19:27.560 | 1.00th=[ 94], 5.00th=[ 171], 10.00th=[ 176], 20.00th=[ 182], 00:19:27.560 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 194], 00:19:27.560 | 70.00th=[ 197], 80.00th=[ 197], 90.00th=[ 199], 95.00th=[ 201], 00:19:27.560 | 99.00th=[ 203], 99.50th=[ 226], 99.90th=[ 259], 99.95th=[ 271], 00:19:27.560 | 99.99th=[ 271] 00:19:27.560 bw ( KiB/s): min=81920, max=93696, per=6.75%, avg=85418.95, stdev=3364.43, samples=20 00:19:27.560 iops : min= 320, max= 366, avg=333.65, stdev=13.16, samples=20 00:19:27.560 lat (msec) : 50=0.44%, 100=0.68%, 250=98.68%, 500=0.21% 00:19:27.560 cpu : usr=0.61%, sys=1.11%, ctx=4686, majf=0, minf=1 00:19:27.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:19:27.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.560 issued rwts: total=0,3400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.560 job5: (groupid=0, jobs=1): err= 0: pid=78552: Sat Jul 13 03:08:32 2024 00:19:27.560 write: IOPS=330, BW=82.7MiB/s (86.7MB/s)(839MiB/10143msec); 0 zone resets 00:19:27.560 slat (usec): min=16, max=132866, avg=2974.37, stdev=5571.50 00:19:27.560 clat (msec): min=135, max=325, avg=190.38, stdev=14.34 00:19:27.560 lat (msec): min=135, max=325, avg=193.36, stdev=13.48 00:19:27.560 clat percentiles (msec): 00:19:27.560 | 1.00th=[ 163], 5.00th=[ 171], 10.00th=[ 180], 20.00th=[ 182], 00:19:27.560 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 194], 00:19:27.560 | 70.00th=[ 197], 80.00th=[ 197], 90.00th=[ 199], 95.00th=[ 201], 00:19:27.560 | 99.00th=[ 257], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 326], 00:19:27.560 | 99.99th=[ 326] 00:19:27.560 bw ( KiB/s): min=67584, max=90112, per=6.66%, avg=84300.80, stdev=4792.27, samples=20 00:19:27.560 iops : min= 264, max= 352, avg=329.30, stdev=18.72, samples=20 00:19:27.560 lat (msec) : 250=98.78%, 500=1.22% 00:19:27.560 cpu : usr=0.56%, sys=0.89%, ctx=3945, majf=0, minf=1 00:19:27.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:27.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.560 issued rwts: total=0,3356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.560 job6: (groupid=0, jobs=1): err= 0: pid=78553: Sat Jul 13 03:08:32 2024 00:19:27.560 write: IOPS=572, BW=143MiB/s 
(150MB/s)(1446MiB/10108msec); 0 zone resets 00:19:27.560 slat (usec): min=16, max=19438, avg=1723.98, stdev=2934.73 00:19:27.560 clat (msec): min=22, max=227, avg=110.11, stdev=10.04 00:19:27.560 lat (msec): min=22, max=227, avg=111.84, stdev= 9.70 00:19:27.560 clat percentiles (msec): 00:19:27.560 | 1.00th=[ 100], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 107], 00:19:27.560 | 30.00th=[ 108], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 112], 00:19:27.560 | 70.00th=[ 113], 80.00th=[ 114], 90.00th=[ 115], 95.00th=[ 116], 00:19:27.560 | 99.00th=[ 130], 99.50th=[ 182], 99.90th=[ 222], 99.95th=[ 222], 00:19:27.560 | 99.99th=[ 228] 00:19:27.560 bw ( KiB/s): min=141312, max=154112, per=11.56%, avg=146420.55, stdev=4169.85, samples=20 00:19:27.560 iops : min= 552, max= 602, avg=571.95, stdev=16.29, samples=20 00:19:27.560 lat (msec) : 50=0.35%, 100=2.51%, 250=97.15% 00:19:27.560 cpu : usr=0.85%, sys=1.66%, ctx=7622, majf=0, minf=1 00:19:27.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:27.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.560 issued rwts: total=0,5782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.560 job7: (groupid=0, jobs=1): err= 0: pid=78554: Sat Jul 13 03:08:32 2024 00:19:27.560 write: IOPS=331, BW=82.8MiB/s (86.8MB/s)(840MiB/10145msec); 0 zone resets 00:19:27.560 slat (usec): min=19, max=99496, avg=2969.92, stdev=5376.40 00:19:27.560 clat (msec): min=103, max=330, avg=190.17, stdev=14.47 00:19:27.560 lat (msec): min=103, max=331, avg=193.14, stdev=13.70 00:19:27.560 clat percentiles (msec): 00:19:27.560 | 1.00th=[ 150], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 182], 00:19:27.560 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:19:27.560 | 70.00th=[ 197], 80.00th=[ 197], 90.00th=[ 199], 95.00th=[ 201], 00:19:27.560 | 99.00th=[ 239], 99.50th=[ 284], 99.90th=[ 321], 99.95th=[ 330], 00:19:27.560 | 99.99th=[ 330] 00:19:27.560 bw ( KiB/s): min=75776, max=90112, per=6.67%, avg=84403.20, stdev=3157.38, samples=20 00:19:27.560 iops : min= 296, max= 352, avg=329.70, stdev=12.33, samples=20 00:19:27.560 lat (msec) : 250=99.23%, 500=0.77% 00:19:27.560 cpu : usr=0.59%, sys=1.06%, ctx=4145, majf=0, minf=1 00:19:27.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:27.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.560 issued rwts: total=0,3360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.560 job8: (groupid=0, jobs=1): err= 0: pid=78555: Sat Jul 13 03:08:32 2024 00:19:27.560 write: IOPS=575, BW=144MiB/s (151MB/s)(1456MiB/10116msec); 0 zone resets 00:19:27.560 slat (usec): min=18, max=12344, avg=1693.62, stdev=2915.70 00:19:27.560 clat (msec): min=14, max=227, avg=109.43, stdev=11.35 00:19:27.560 lat (msec): min=14, max=227, avg=111.12, stdev=11.15 00:19:27.560 clat percentiles (msec): 00:19:27.560 | 1.00th=[ 69], 5.00th=[ 101], 10.00th=[ 104], 20.00th=[ 107], 00:19:27.560 | 30.00th=[ 107], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 112], 00:19:27.560 | 70.00th=[ 113], 80.00th=[ 114], 90.00th=[ 115], 95.00th=[ 116], 00:19:27.560 | 99.00th=[ 122], 99.50th=[ 182], 99.90th=[ 222], 99.95th=[ 222], 00:19:27.560 | 99.99th=[ 228] 
00:19:27.560 bw ( KiB/s): min=141312, max=160768, per=11.65%, avg=147481.60, stdev=5058.94, samples=20 00:19:27.560 iops : min= 552, max= 628, avg=576.10, stdev=19.76, samples=20 00:19:27.560 lat (msec) : 20=0.07%, 50=0.43%, 100=3.69%, 250=95.81% 00:19:27.560 cpu : usr=0.95%, sys=1.76%, ctx=5363, majf=0, minf=1 00:19:27.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:27.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.561 issued rwts: total=0,5824,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.561 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.561 job9: (groupid=0, jobs=1): err= 0: pid=78556: Sat Jul 13 03:08:32 2024 00:19:27.561 write: IOPS=432, BW=108MiB/s (113MB/s)(1093MiB/10111msec); 0 zone resets 00:19:27.561 slat (usec): min=18, max=11301, avg=2282.66, stdev=3915.85 00:19:27.561 clat (msec): min=14, max=250, avg=145.71, stdev=12.81 00:19:27.561 lat (msec): min=14, max=250, avg=147.99, stdev=12.42 00:19:27.561 clat percentiles (msec): 00:19:27.561 | 1.00th=[ 101], 5.00th=[ 134], 10.00th=[ 138], 20.00th=[ 142], 00:19:27.561 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 148], 60.00th=[ 150], 00:19:27.561 | 70.00th=[ 153], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 155], 00:19:27.561 | 99.00th=[ 157], 99.50th=[ 199], 99.90th=[ 243], 99.95th=[ 245], 00:19:27.561 | 99.99th=[ 251] 00:19:27.561 bw ( KiB/s): min=106496, max=116736, per=8.71%, avg=110259.20, stdev=3108.94, samples=20 00:19:27.561 iops : min= 416, max= 456, avg=430.70, stdev=12.14, samples=20 00:19:27.561 lat (msec) : 20=0.09%, 50=0.37%, 100=0.55%, 250=98.95%, 500=0.05% 00:19:27.561 cpu : usr=0.79%, sys=1.29%, ctx=4083, majf=0, minf=1 00:19:27.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:27.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.561 issued rwts: total=0,4370,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.561 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.561 job10: (groupid=0, jobs=1): err= 0: pid=78557: Sat Jul 13 03:08:32 2024 00:19:27.561 write: IOPS=432, BW=108MiB/s (113MB/s)(1093MiB/10101msec); 0 zone resets 00:19:27.561 slat (usec): min=16, max=12264, avg=2281.51, stdev=3909.69 00:19:27.561 clat (msec): min=13, max=239, avg=145.54, stdev=12.59 00:19:27.561 lat (msec): min=13, max=239, avg=147.82, stdev=12.20 00:19:27.561 clat percentiles (msec): 00:19:27.561 | 1.00th=[ 99], 5.00th=[ 134], 10.00th=[ 138], 20.00th=[ 142], 00:19:27.561 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 148], 60.00th=[ 150], 00:19:27.561 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 155], 00:19:27.561 | 99.00th=[ 157], 99.50th=[ 197], 99.90th=[ 232], 99.95th=[ 232], 00:19:27.561 | 99.99th=[ 241] 00:19:27.561 bw ( KiB/s): min=106496, max=116736, per=8.71%, avg=110321.65, stdev=3353.57, samples=20 00:19:27.561 iops : min= 416, max= 456, avg=430.90, stdev=13.07, samples=20 00:19:27.561 lat (msec) : 20=0.09%, 50=0.37%, 100=0.57%, 250=98.97% 00:19:27.561 cpu : usr=0.80%, sys=1.30%, ctx=5655, majf=0, minf=1 00:19:27.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:27.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.561 issued rwts: 
total=0,4372,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.561 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.561 00:19:27.561 Run status group 0 (all jobs): 00:19:27.561 WRITE: bw=1237MiB/s (1297MB/s), 82.7MiB/s-147MiB/s (86.7MB/s-155MB/s), io=12.3GiB (13.2GB), run=10069-10151msec 00:19:27.561 00:19:27.561 Disk stats (read/write): 00:19:27.561 nvme0n1: ios=49/11807, merge=0/0, ticks=40/1214875, in_queue=1214915, util=97.83% 00:19:27.561 nvme10n1: ios=49/8711, merge=0/0, ticks=52/1216751, in_queue=1216803, util=98.13% 00:19:27.561 nvme1n1: ios=40/6656, merge=0/0, ticks=72/1211387, in_queue=1211459, util=98.16% 00:19:27.561 nvme2n1: ios=27/11820, merge=0/0, ticks=46/1216211, in_queue=1216257, util=98.20% 00:19:27.561 nvme3n1: ios=25/6657, merge=0/0, ticks=27/1211664, in_queue=1211691, util=98.11% 00:19:27.561 nvme4n1: ios=20/6575, merge=0/0, ticks=57/1210739, in_queue=1210796, util=98.21% 00:19:27.561 nvme5n1: ios=0/11429, merge=0/0, ticks=0/1214042, in_queue=1214042, util=98.28% 00:19:27.561 nvme6n1: ios=0/6588, merge=0/0, ticks=0/1211185, in_queue=1211185, util=98.33% 00:19:27.561 nvme7n1: ios=0/11516, merge=0/0, ticks=0/1215262, in_queue=1215262, util=98.70% 00:19:27.561 nvme8n1: ios=0/8615, merge=0/0, ticks=0/1214834, in_queue=1214834, util=98.82% 00:19:27.561 nvme9n1: ios=0/8600, merge=0/0, ticks=0/1212125, in_queue=1212125, util=98.72% 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:27.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:27.561 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK2 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:27.561 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.561 03:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:27.561 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 
00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:27.561 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:27.561 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:27.561 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 
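The entries around this point repeat the same per-subsystem teardown for cnode1 through cnode11 (the trace shows `seq 1 11`). Condensed from the xtrace output, the loop in multiconnection.sh amounts to the sketch below; the retry logic inside waitforserial_disconnect is simplified here to a plain polling loop, and rpc_cmd stands for SPDK's scripts/rpc.py wrapper.

  for i in $(seq 1 "$NVMF_SUBSYS"); do        # NVMF_SUBSYS is 11 in this run
      # Ask the kernel initiator to drop its connection to this subsystem.
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # Wait until no block device with serial SPDK$i is visible any more.
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
          sleep 1
      done
      # Remove the subsystem from the running SPDK target over JSON-RPC.
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done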
00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:27.562 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:27.562 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:27.562 
NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:27.562 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:27.562 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:27.562 rmmod nvme_tcp 00:19:27.562 rmmod nvme_fabrics 00:19:27.562 rmmod nvme_keyring 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 77870 ']' 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 77870 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 77870 ']' 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 77870 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77870 00:19:27.562 killing process with pid 77870 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77870' 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 77870 00:19:27.562 03:08:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 77870 00:19:30.845 03:08:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:30.845 03:08:36 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:30.845 03:08:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:30.846 03:08:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.846 03:08:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:30.846 03:08:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.846 03:08:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.846 03:08:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.846 03:08:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:30.846 ************************************ 00:19:30.846 END TEST nvmf_multiconnection 00:19:30.846 ************************************ 00:19:30.846 00:19:30.846 real 0m51.982s 00:19:30.846 user 2m50.867s 00:19:30.846 sys 0m32.974s 00:19:30.846 03:08:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:30.846 03:08:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.846 03:08:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:30.846 03:08:36 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:30.846 03:08:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:30.846 03:08:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.846 03:08:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:30.846 ************************************ 00:19:30.846 START TEST nvmf_initiator_timeout 00:19:30.846 ************************************ 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:30.846 * Looking for test storage... 
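The multiconnection teardown traced a little earlier (and repeated after the initiator_timeout test at the end of this section) runs through nvmftestfini from nvmf/common.sh. A condensed, slightly simplified sketch of that sequence as it appears in the trace:

  # Unload the kernel NVMe-oF modules; retried because references can linger
  # for a moment after the disconnects (the trace wraps this in set +e/set -e).
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
      sleep 1
  done
  modprobe -v -r nvme-fabrics

  # Stop the nvmf_tgt process started for the test.
  kill "$nvmfpid"
  wait "$nvmfpid"

  # Drop the per-test network namespace and flush the initiator-side address.
  _remove_spdk_ns
  ip -4 addr flush nvmf_init_if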
00:19:30.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:30.846 03:08:36 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:30.846 Cannot find device "nvmf_tgt_br" 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.846 Cannot find device "nvmf_tgt_br2" 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:30.846 03:08:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:30.846 Cannot find device "nvmf_tgt_br" 00:19:30.846 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:19:30.846 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:30.846 Cannot find device "nvmf_tgt_br2" 00:19:30.846 03:08:37 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:19:30.846 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:30.846 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:30.846 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.846 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:19:30.846 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.846 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:19:30.846 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.846 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.846 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.846 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
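At this point nvmf_veth_init has finished building the NET_TYPE=virt topology. Condensed from the ip commands traced above, the layout is:

  # One namespace holds the SPDK target; the host side acts as the initiator.
  ip netns add nvmf_tgt_ns_spdk

  # Three veth pairs: one initiator-facing, two target-facing.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target listeners.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # (all interfaces, plus lo inside the namespace, are then brought up)

  # A host-side bridge stitches the three peer ends together.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br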
00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:30.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:30.847 00:19:30.847 --- 10.0.0.2 ping statistics --- 00:19:30.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.847 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:30.847 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:30.847 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:19:30.847 00:19:30.847 --- 10.0.0.3 ping statistics --- 00:19:30.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.847 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:30.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:30.847 00:19:30.847 --- 10.0.0.1 ping statistics --- 00:19:30.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.847 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=78953 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 78953 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 78953 ']' 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:19:30.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.847 03:08:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:31.105 [2024-07-13 03:08:37.452292] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:31.105 [2024-07-13 03:08:37.452470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.364 [2024-07-13 03:08:37.633689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.622 [2024-07-13 03:08:37.886794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.622 [2024-07-13 03:08:37.886889] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.622 [2024-07-13 03:08:37.886948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.622 [2024-07-13 03:08:37.886963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.622 [2024-07-13 03:08:37.886977] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
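With the namespace reachable, nvmfappstart launches the target inside it. The sketch below is condensed from the trace above; the waitforlisten helper is paraphrased here as a simple poll of the JSON-RPC socket, which is an approximation of what autotest_common.sh actually does.

  # Open the NVMe/TCP port towards the initiator and let the bridge forward.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity-check the topology from both sides.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

  # Start the SPDK target inside the namespace: 4 cores, all trace groups on.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Block until the app answers JSON-RPC on /var/tmp/spdk.sock.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
      rpc_get_methods &> /dev/null; do
      sleep 0.5
  done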
00:19:31.622 [2024-07-13 03:08:37.887096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.623 [2024-07-13 03:08:37.887247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.623 [2024-07-13 03:08:37.888022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.623 [2024-07-13 03:08:37.888023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.881 [2024-07-13 03:08:38.118918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:32.139 Malloc0 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:32.139 Delay0 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:32.139 [2024-07-13 03:08:38.536953] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:32.139 [2024-07-13 03:08:38.569153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.139 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:32.397 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:32.397 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:19:32.397 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:32.398 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:32.398 03:08:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:19:34.299 03:08:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:34.299 03:08:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:34.299 03:08:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:34.299 03:08:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:34.299 03:08:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:34.299 03:08:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:19:34.299 03:08:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=79020 00:19:34.299 03:08:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:34.299 03:08:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:34.299 [global] 00:19:34.299 thread=1 00:19:34.299 invalidate=1 00:19:34.299 rw=write 00:19:34.299 time_based=1 00:19:34.299 runtime=60 00:19:34.299 ioengine=libaio 00:19:34.299 direct=1 00:19:34.299 bs=4096 00:19:34.299 iodepth=1 00:19:34.299 norandommap=0 00:19:34.299 numjobs=1 00:19:34.299 00:19:34.299 verify_dump=1 00:19:34.299 verify_backlog=512 00:19:34.299 verify_state_save=0 00:19:34.299 do_verify=1 00:19:34.299 verify=crc32c-intel 00:19:34.299 [job0] 00:19:34.299 filename=/dev/nvme0n1 00:19:34.299 Could not set queue depth (nvme0n1) 00:19:34.557 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:34.557 fio-3.35 00:19:34.557 Starting 1 thread 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.842 true 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.842 true 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.842 true 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.842 true 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.842 03:08:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.370 true 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.370 true 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.370 true 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.370 true 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:40.370 03:08:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 79020 00:20:36.629 00:20:36.629 job0: (groupid=0, jobs=1): err= 0: pid=79041: Sat Jul 13 03:09:40 2024 00:20:36.629 read: IOPS=665, BW=2662KiB/s (2726kB/s)(156MiB/60000msec) 00:20:36.629 slat (usec): min=11, max=103, avg=16.10, stdev= 4.39 00:20:36.629 clat (usec): min=196, max=1530, avg=250.05, stdev=27.56 00:20:36.629 lat (usec): min=208, max=1545, avg=266.15, stdev=28.54 00:20:36.629 clat percentiles (usec): 00:20:36.629 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:20:36.630 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:20:36.630 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 302], 00:20:36.630 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 379], 99.95th=[ 449], 00:20:36.630 | 99.99th=[ 832] 00:20:36.630 write: IOPS=668, BW=2672KiB/s (2736kB/s)(157MiB/60000msec); 0 zone resets 00:20:36.630 slat (usec): min=14, max=8836, avg=24.07, stdev=58.58 00:20:36.630 clat (usec): min=125, max=40654k, avg=1203.91, stdev=203065.81 00:20:36.630 lat (usec): min=159, max=40654k, avg=1227.98, stdev=203065.80 00:20:36.630 clat percentiles (usec): 00:20:36.630 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:20:36.630 | 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192], 00:20:36.630 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 225], 95.00th=[ 241], 00:20:36.630 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 302], 99.95th=[ 322], 00:20:36.630 | 99.99th=[ 594] 00:20:36.630 bw ( KiB/s): min= 272, max= 9280, per=100.00%, avg=8044.92, stdev=1440.63, samples=39 00:20:36.630 iops : min= 68, max= 2320, avg=2011.23, stdev=360.16, samples=39 00:20:36.630 lat (usec) : 250=78.24%, 500=21.74%, 750=0.01%, 1000=0.01% 00:20:36.630 lat (msec) : 2=0.01%, >=2000=0.01% 00:20:36.630 cpu : usr=0.56%, sys=2.06%, ctx=80034, majf=0, minf=2 00:20:36.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.630 issued rwts: total=39936,40080,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:36.630 00:20:36.630 Run status group 0 (all jobs): 00:20:36.630 READ: bw=2662KiB/s (2726kB/s), 2662KiB/s-2662KiB/s (2726kB/s-2726kB/s), io=156MiB (164MB), run=60000-60000msec 00:20:36.630 WRITE: bw=2672KiB/s (2736kB/s), 2672KiB/s-2672KiB/s (2736kB/s-2736kB/s), io=157MiB (164MB), run=60000-60000msec 00:20:36.630 00:20:36.630 Disk stats (read/write): 00:20:36.630 nvme0n1: ios=39916/39936, merge=0/0, ticks=10450/8237, in_queue=18687, util=99.88% 00:20:36.630 03:09:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:36.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:36.630 nvmf hotplug test: fio successful as expected 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:36.630 rmmod nvme_tcp 00:20:36.630 rmmod nvme_fabrics 00:20:36.630 rmmod nvme_keyring 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 78953 ']' 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 78953 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 78953 ']' 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 78953 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:36.630 
03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78953 00:20:36.630 killing process with pid 78953 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78953' 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 78953 00:20:36.630 03:09:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 78953 00:20:36.630 03:09:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:36.630 03:09:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:36.630 03:09:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:36.630 03:09:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:36.630 03:09:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:36.630 03:09:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.630 03:09:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.630 03:09:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.630 03:09:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:36.630 00:20:36.630 real 1m5.531s 00:20:36.630 user 3m54.633s 00:20:36.630 sys 0m21.924s 00:20:36.630 03:09:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:36.630 ************************************ 00:20:36.630 END TEST nvmf_initiator_timeout 00:20:36.630 ************************************ 00:20:36.630 03:09:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:36.630 03:09:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:36.630 03:09:42 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:20:36.630 03:09:42 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:36.630 03:09:42 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.630 03:09:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:36.630 03:09:42 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:36.630 03:09:42 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.630 03:09:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:36.630 03:09:42 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:20:36.630 03:09:42 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:36.630 03:09:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:36.630 03:09:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:36.630 03:09:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:36.630 ************************************ 00:20:36.630 START TEST nvmf_identify 00:20:36.630 ************************************ 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:36.630 * Looking 
for test storage... 00:20:36.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:36.630 Cannot find device "nvmf_tgt_br" 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:36.630 Cannot find device "nvmf_tgt_br2" 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:36.630 Cannot find device "nvmf_tgt_br" 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:36.630 Cannot find device "nvmf_tgt_br2" 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:36.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.630 03:09:42 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:36.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:36.630 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:36.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:36.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:20:36.631 00:20:36.631 --- 10.0.0.2 ping statistics --- 00:20:36.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.631 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:36.631 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:36.631 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:20:36.631 00:20:36.631 --- 10.0.0.3 ping statistics --- 00:20:36.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.631 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:36.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:36.631 00:20:36.631 --- 10.0.0.1 ping statistics --- 00:20:36.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.631 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=79875 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 79875 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 79875 ']' 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
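The nvmf_veth_init steps traced above reduce to a short piece of iproute2 plumbing. Below is a minimal standalone sketch of the same topology, using the interface names and 10.0.0.0/24 addresses shown in the trace; it is an illustration of what the trace does, not the test harness itself.

  #!/usr/bin/env bash
  # Sketch of the veth/netns topology built by nvmf_veth_init in the trace above.
  set -e

  ip netns add nvmf_tgt_ns_spdk                                   # target-side network namespace

  # veth pairs: the *_if ends carry traffic, the *_br ends get bridged together
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # target ends move into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the root-namespace ends so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

  # allow NVMe/TCP traffic on the default port and bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # connectivity checks, mirroring the pings in the log
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1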
00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.631 03:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.631 [2024-07-13 03:09:43.063707] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:36.631 [2024-07-13 03:09:43.063956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.888 [2024-07-13 03:09:43.236396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.146 [2024-07-13 03:09:43.411477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.146 [2024-07-13 03:09:43.411547] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.146 [2024-07-13 03:09:43.411561] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.146 [2024-07-13 03:09:43.411574] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.146 [2024-07-13 03:09:43.411587] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.146 [2024-07-13 03:09:43.411773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.146 [2024-07-13 03:09:43.412020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.146 [2024-07-13 03:09:43.412750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.146 [2024-07-13 03:09:43.412758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.146 [2024-07-13 03:09:43.589906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:37.713 03:09:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.713 03:09:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:37.713 03:09:43 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.713 03:09:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.713 03:09:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:37.713 [2024-07-13 03:09:43.929102] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.713 03:09:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.713 03:09:43 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:37.713 03:09:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:37.713 03:09:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:37.713 03:09:43 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:37.713 03:09:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.713 03:09:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:37.713 Malloc0 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:37.713 [2024-07-13 03:09:44.076112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.713 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:37.713 [ 00:20:37.713 { 00:20:37.713 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:37.713 "subtype": "Discovery", 00:20:37.713 "listen_addresses": [ 00:20:37.713 { 00:20:37.713 "trtype": "TCP", 00:20:37.713 "adrfam": "IPv4", 00:20:37.713 "traddr": "10.0.0.2", 00:20:37.713 "trsvcid": "4420" 00:20:37.713 } 00:20:37.713 ], 00:20:37.713 "allow_any_host": true, 00:20:37.713 "hosts": [] 00:20:37.713 }, 00:20:37.713 { 00:20:37.713 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.714 "subtype": "NVMe", 00:20:37.714 "listen_addresses": [ 00:20:37.714 { 00:20:37.714 "trtype": "TCP", 00:20:37.714 "adrfam": "IPv4", 00:20:37.714 "traddr": "10.0.0.2", 00:20:37.714 "trsvcid": "4420" 00:20:37.714 } 00:20:37.714 ], 00:20:37.714 "allow_any_host": true, 00:20:37.714 "hosts": [], 00:20:37.714 "serial_number": "SPDK00000000000001", 00:20:37.714 "model_number": "SPDK bdev Controller", 00:20:37.714 "max_namespaces": 32, 00:20:37.714 "min_cntlid": 1, 00:20:37.714 "max_cntlid": 65519, 00:20:37.714 "namespaces": [ 00:20:37.714 { 00:20:37.714 "nsid": 1, 00:20:37.714 "bdev_name": "Malloc0", 00:20:37.714 "name": "Malloc0", 00:20:37.714 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:37.714 "eui64": "ABCDEF0123456789", 00:20:37.714 "uuid": "ddcdb7a3-68fb-4473-ba32-23a9e16c7f12" 00:20:37.714 } 00:20:37.714 ] 00:20:37.714 } 00:20:37.714 ] 00:20:37.714 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.714 03:09:44 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:37.714 [2024-07-13 03:09:44.147823] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:37.714 [2024-07-13 03:09:44.148189] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79910 ] 00:20:37.975 [2024-07-13 03:09:44.305352] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:37.975 [2024-07-13 03:09:44.305521] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:37.975 [2024-07-13 03:09:44.305537] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:37.975 [2024-07-13 03:09:44.305565] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:37.975 [2024-07-13 03:09:44.305582] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:37.975 [2024-07-13 03:09:44.305790] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:37.975 [2024-07-13 03:09:44.305889] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:37.975 [2024-07-13 03:09:44.318058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:37.975 [2024-07-13 03:09:44.318118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:37.975 [2024-07-13 03:09:44.318133] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:37.975 [2024-07-13 03:09:44.318160] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:37.975 [2024-07-13 03:09:44.318245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.318261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.318286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:37.975 [2024-07-13 03:09:44.318312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:37.975 [2024-07-13 03:09:44.318356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:37.975 [2024-07-13 03:09:44.325993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.975 [2024-07-13 03:09:44.326042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.975 [2024-07-13 03:09:44.326052] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.326078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:37.975 [2024-07-13 03:09:44.326107] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:37.975 [2024-07-13 03:09:44.326128] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:37.975 [2024-07-13 03:09:44.326142] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:37.975 [2024-07-13 03:09:44.326160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.326169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.326176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:37.975 [2024-07-13 03:09:44.326192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.975 [2024-07-13 03:09:44.326229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:37.975 [2024-07-13 03:09:44.326345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.975 [2024-07-13 03:09:44.326362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.975 [2024-07-13 03:09:44.326370] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.326378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:37.975 [2024-07-13 03:09:44.326393] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:37.975 [2024-07-13 03:09:44.326411] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:37.975 [2024-07-13 03:09:44.326425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.326434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.326442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:37.975 [2024-07-13 03:09:44.326467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.975 [2024-07-13 03:09:44.326519] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:37.975 [2024-07-13 03:09:44.326595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.975 [2024-07-13 03:09:44.326618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.975 [2024-07-13 03:09:44.326626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.326634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:37.975 [2024-07-13 03:09:44.326645] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:37.975 [2024-07-13 03:09:44.326661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:37.975 [2024-07-13 03:09:44.326679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.326690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.326698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:37.975 [2024-07-13 03:09:44.326713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 
cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.975 [2024-07-13 03:09:44.326743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:37.975 [2024-07-13 03:09:44.326805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.975 [2024-07-13 03:09:44.326818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.975 [2024-07-13 03:09:44.326830] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.326839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:37.975 [2024-07-13 03:09:44.326865] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:37.975 [2024-07-13 03:09:44.326894] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.326903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.326915] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:37.975 [2024-07-13 03:09:44.326929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.975 [2024-07-13 03:09:44.326961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:37.975 [2024-07-13 03:09:44.327044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.975 [2024-07-13 03:09:44.327059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.975 [2024-07-13 03:09:44.327066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.975 [2024-07-13 03:09:44.327073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:37.975 [2024-07-13 03:09:44.327083] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:37.975 [2024-07-13 03:09:44.327093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:37.975 [2024-07-13 03:09:44.327107] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:37.975 [2024-07-13 03:09:44.327217] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:37.976 [2024-07-13 03:09:44.327226] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:37.976 [2024-07-13 03:09:44.327242] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.327251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.327265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:37.976 [2024-07-13 03:09:44.327280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.976 [2024-07-13 03:09:44.327311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x62600001b100, cid 0, qid 0 00:20:37.976 [2024-07-13 03:09:44.327382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.976 [2024-07-13 03:09:44.327394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.976 [2024-07-13 03:09:44.327401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.327411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:37.976 [2024-07-13 03:09:44.327422] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:37.976 [2024-07-13 03:09:44.327440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.327449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.327456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:37.976 [2024-07-13 03:09:44.327470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.976 [2024-07-13 03:09:44.327515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:37.976 [2024-07-13 03:09:44.327587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.976 [2024-07-13 03:09:44.327600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.976 [2024-07-13 03:09:44.327607] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.327614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:37.976 [2024-07-13 03:09:44.327631] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:37.976 [2024-07-13 03:09:44.327643] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:37.976 [2024-07-13 03:09:44.327658] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:37.976 [2024-07-13 03:09:44.327677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:37.976 [2024-07-13 03:09:44.327698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.327708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:37.976 [2024-07-13 03:09:44.327723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.976 [2024-07-13 03:09:44.327774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:37.976 [2024-07-13 03:09:44.327936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:37.976 [2024-07-13 03:09:44.327975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:37.976 [2024-07-13 03:09:44.327986] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.327994] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:37.976 [2024-07-13 03:09:44.328004] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:37.976 [2024-07-13 03:09:44.328012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328034] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328043] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.976 [2024-07-13 03:09:44.328069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.976 [2024-07-13 03:09:44.328075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:37.976 [2024-07-13 03:09:44.328106] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:37.976 [2024-07-13 03:09:44.328117] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:37.976 [2024-07-13 03:09:44.328126] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:37.976 [2024-07-13 03:09:44.328136] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:37.976 [2024-07-13 03:09:44.328144] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:37.976 [2024-07-13 03:09:44.328171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:37.976 [2024-07-13 03:09:44.328190] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:37.976 [2024-07-13 03:09:44.328209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328219] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:37.976 [2024-07-13 03:09:44.328244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:37.976 [2024-07-13 03:09:44.328276] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:37.976 [2024-07-13 03:09:44.328364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.976 [2024-07-13 03:09:44.328376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.976 [2024-07-13 03:09:44.328383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:37.976 [2024-07-13 03:09:44.328404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.976 [2024-07-13 
03:09:44.328413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:37.976 [2024-07-13 03:09:44.328438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.976 [2024-07-13 03:09:44.328454] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328468] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:37.976 [2024-07-13 03:09:44.328497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.976 [2024-07-13 03:09:44.328508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328523] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:37.976 [2024-07-13 03:09:44.328534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.976 [2024-07-13 03:09:44.328545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.976 [2024-07-13 03:09:44.328570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.976 [2024-07-13 03:09:44.328582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:37.976 [2024-07-13 03:09:44.328606] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:37.976 [2024-07-13 03:09:44.328620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.976 [2024-07-13 03:09:44.328629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:37.976 [2024-07-13 03:09:44.328643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.976 [2024-07-13 03:09:44.328675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:37.976 [2024-07-13 03:09:44.328687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:37.976 [2024-07-13 03:09:44.328696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:37.976 [2024-07-13 03:09:44.328704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.977 [2024-07-13 03:09:44.328712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:37.977 [2024-07-13 03:09:44.328832] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.977 [2024-07-13 03:09:44.328845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.977 [2024-07-13 03:09:44.328863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.328871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:37.977 [2024-07-13 03:09:44.328881] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:37.977 [2024-07-13 03:09:44.328906] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:37.977 [2024-07-13 03:09:44.328986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.329004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:37.977 [2024-07-13 03:09:44.329020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.977 [2024-07-13 03:09:44.329053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:37.977 [2024-07-13 03:09:44.329150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:37.977 [2024-07-13 03:09:44.329176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:37.977 [2024-07-13 03:09:44.329187] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.329195] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:37.977 [2024-07-13 03:09:44.329204] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:37.977 [2024-07-13 03:09:44.329213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.329228] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.329236] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.329251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.977 [2024-07-13 03:09:44.329267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.977 [2024-07-13 03:09:44.329275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.329286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:37.977 [2024-07-13 03:09:44.329315] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:37.977 [2024-07-13 03:09:44.329410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.329426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:37.977 [2024-07-13 03:09:44.329442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.977 [2024-07-13 03:09:44.329455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
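The connect/identify state machine being traced here runs against the subsystem that identify.sh provisioned a moment earlier through rpc_cmd. As a standalone reference, roughly the same provisioning issued directly with SPDK's scripts/rpc.py would look like the sketch below; the rpc.py path and the default /var/tmp/spdk.sock RPC socket are assumptions of the sketch, and the flags simply echo the rpc_cmd arguments visible in the trace.

  # Sketch: the provisioning identify.sh performed via rpc_cmd, issued directly with rpc.py.
  SPDK=/home/vagrant/spdk_repo/spdk

  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport, same options the test passes
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MB RAM-backed bdev, 512-byte blocks
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                                          # allow any host, set serial number
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                                        # NVM subsystem listener
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py nvmf_get_subsystems                              # should print the JSON shown earlier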
00:20:37.977 [2024-07-13 03:09:44.329463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.329470] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:37.977 [2024-07-13 03:09:44.329500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.977 [2024-07-13 03:09:44.329543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:37.977 [2024-07-13 03:09:44.329560] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:37.977 [2024-07-13 03:09:44.333988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:37.977 [2024-07-13 03:09:44.334029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:37.977 [2024-07-13 03:09:44.334040] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334048] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:20:37.977 [2024-07-13 03:09:44.334057] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:20:37.977 [2024-07-13 03:09:44.334065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334078] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334086] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.977 [2024-07-13 03:09:44.334106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.977 [2024-07-13 03:09:44.334112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:37.977 [2024-07-13 03:09:44.334136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.977 [2024-07-13 03:09:44.334147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.977 [2024-07-13 03:09:44.334153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:37.977 [2024-07-13 03:09:44.334188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:37.977 [2024-07-13 03:09:44.334219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.977 [2024-07-13 03:09:44.334262] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:37.977 [2024-07-13 03:09:44.334383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:37.977 [2024-07-13 03:09:44.334395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:37.977 [2024-07-13 03:09:44.334402] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 
03:09:44.334409] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:20:37.977 [2024-07-13 03:09:44.334420] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:20:37.977 [2024-07-13 03:09:44.334428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334440] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334448] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.977 [2024-07-13 03:09:44.334470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.977 [2024-07-13 03:09:44.334493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:37.977 [2024-07-13 03:09:44.334526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:37.977 [2024-07-13 03:09:44.334556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.977 [2024-07-13 03:09:44.334595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:37.977 [2024-07-13 03:09:44.334711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:37.977 [2024-07-13 03:09:44.334726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:37.977 [2024-07-13 03:09:44.334733] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334740] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:20:37.977 [2024-07-13 03:09:44.334749] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:20:37.977 [2024-07-13 03:09:44.334768] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334780] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334793] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.977 [2024-07-13 03:09:44.334835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.977 [2024-07-13 03:09:44.334841] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.977 [2024-07-13 03:09:44.334864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:37.977 ===================================================== 00:20:37.977 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:37.977 ===================================================== 00:20:37.977 Controller Capabilities/Features 00:20:37.977 ================================ 00:20:37.977 Vendor ID: 0000 00:20:37.977 Subsystem Vendor ID: 0000 00:20:37.977 
Serial Number: .................... 00:20:37.977 Model Number: ........................................ 00:20:37.977 Firmware Version: 24.09 00:20:37.977 Recommended Arb Burst: 0 00:20:37.977 IEEE OUI Identifier: 00 00 00 00:20:37.977 Multi-path I/O 00:20:37.977 May have multiple subsystem ports: No 00:20:37.978 May have multiple controllers: No 00:20:37.978 Associated with SR-IOV VF: No 00:20:37.978 Max Data Transfer Size: 131072 00:20:37.978 Max Number of Namespaces: 0 00:20:37.978 Max Number of I/O Queues: 1024 00:20:37.978 NVMe Specification Version (VS): 1.3 00:20:37.978 NVMe Specification Version (Identify): 1.3 00:20:37.978 Maximum Queue Entries: 128 00:20:37.978 Contiguous Queues Required: Yes 00:20:37.978 Arbitration Mechanisms Supported 00:20:37.978 Weighted Round Robin: Not Supported 00:20:37.978 Vendor Specific: Not Supported 00:20:37.978 Reset Timeout: 15000 ms 00:20:37.978 Doorbell Stride: 4 bytes 00:20:37.978 NVM Subsystem Reset: Not Supported 00:20:37.978 Command Sets Supported 00:20:37.978 NVM Command Set: Supported 00:20:37.978 Boot Partition: Not Supported 00:20:37.978 Memory Page Size Minimum: 4096 bytes 00:20:37.978 Memory Page Size Maximum: 4096 bytes 00:20:37.978 Persistent Memory Region: Not Supported 00:20:37.978 Optional Asynchronous Events Supported 00:20:37.978 Namespace Attribute Notices: Not Supported 00:20:37.978 Firmware Activation Notices: Not Supported 00:20:37.978 ANA Change Notices: Not Supported 00:20:37.978 PLE Aggregate Log Change Notices: Not Supported 00:20:37.978 LBA Status Info Alert Notices: Not Supported 00:20:37.978 EGE Aggregate Log Change Notices: Not Supported 00:20:37.978 Normal NVM Subsystem Shutdown event: Not Supported 00:20:37.978 Zone Descriptor Change Notices: Not Supported 00:20:37.978 Discovery Log Change Notices: Supported 00:20:37.978 Controller Attributes 00:20:37.978 128-bit Host Identifier: Not Supported 00:20:37.978 Non-Operational Permissive Mode: Not Supported 00:20:37.978 NVM Sets: Not Supported 00:20:37.978 Read Recovery Levels: Not Supported 00:20:37.978 Endurance Groups: Not Supported 00:20:37.978 Predictable Latency Mode: Not Supported 00:20:37.978 Traffic Based Keep ALive: Not Supported 00:20:37.978 Namespace Granularity: Not Supported 00:20:37.978 SQ Associations: Not Supported 00:20:37.978 UUID List: Not Supported 00:20:37.978 Multi-Domain Subsystem: Not Supported 00:20:37.978 Fixed Capacity Management: Not Supported 00:20:37.978 Variable Capacity Management: Not Supported 00:20:37.978 Delete Endurance Group: Not Supported 00:20:37.978 Delete NVM Set: Not Supported 00:20:37.978 Extended LBA Formats Supported: Not Supported 00:20:37.978 Flexible Data Placement Supported: Not Supported 00:20:37.978 00:20:37.978 Controller Memory Buffer Support 00:20:37.978 ================================ 00:20:37.978 Supported: No 00:20:37.978 00:20:37.978 Persistent Memory Region Support 00:20:37.978 ================================ 00:20:37.978 Supported: No 00:20:37.978 00:20:37.978 Admin Command Set Attributes 00:20:37.978 ============================ 00:20:37.978 Security Send/Receive: Not Supported 00:20:37.978 Format NVM: Not Supported 00:20:37.978 Firmware Activate/Download: Not Supported 00:20:37.978 Namespace Management: Not Supported 00:20:37.978 Device Self-Test: Not Supported 00:20:37.978 Directives: Not Supported 00:20:37.978 NVMe-MI: Not Supported 00:20:37.978 Virtualization Management: Not Supported 00:20:37.978 Doorbell Buffer Config: Not Supported 00:20:37.978 Get LBA Status Capability: Not Supported 00:20:37.978 
Command & Feature Lockdown Capability: Not Supported 00:20:37.978 Abort Command Limit: 1 00:20:37.978 Async Event Request Limit: 4 00:20:37.978 Number of Firmware Slots: N/A 00:20:37.978 Firmware Slot 1 Read-Only: N/A 00:20:37.978 Firmware Activation Without Reset: N/A 00:20:37.978 Multiple Update Detection Support: N/A 00:20:37.978 Firmware Update Granularity: No Information Provided 00:20:37.978 Per-Namespace SMART Log: No 00:20:37.978 Asymmetric Namespace Access Log Page: Not Supported 00:20:37.978 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:37.978 Command Effects Log Page: Not Supported 00:20:37.978 Get Log Page Extended Data: Supported 00:20:37.978 Telemetry Log Pages: Not Supported 00:20:37.978 Persistent Event Log Pages: Not Supported 00:20:37.978 Supported Log Pages Log Page: May Support 00:20:37.978 Commands Supported & Effects Log Page: Not Supported 00:20:37.978 Feature Identifiers & Effects Log Page:May Support 00:20:37.978 NVMe-MI Commands & Effects Log Page: May Support 00:20:37.978 Data Area 4 for Telemetry Log: Not Supported 00:20:37.978 Error Log Page Entries Supported: 128 00:20:37.978 Keep Alive: Not Supported 00:20:37.978 00:20:37.978 NVM Command Set Attributes 00:20:37.978 ========================== 00:20:37.978 Submission Queue Entry Size 00:20:37.978 Max: 1 00:20:37.978 Min: 1 00:20:37.978 Completion Queue Entry Size 00:20:37.978 Max: 1 00:20:37.978 Min: 1 00:20:37.978 Number of Namespaces: 0 00:20:37.978 Compare Command: Not Supported 00:20:37.978 Write Uncorrectable Command: Not Supported 00:20:37.978 Dataset Management Command: Not Supported 00:20:37.978 Write Zeroes Command: Not Supported 00:20:37.978 Set Features Save Field: Not Supported 00:20:37.978 Reservations: Not Supported 00:20:37.978 Timestamp: Not Supported 00:20:37.978 Copy: Not Supported 00:20:37.978 Volatile Write Cache: Not Present 00:20:37.978 Atomic Write Unit (Normal): 1 00:20:37.978 Atomic Write Unit (PFail): 1 00:20:37.978 Atomic Compare & Write Unit: 1 00:20:37.978 Fused Compare & Write: Supported 00:20:37.978 Scatter-Gather List 00:20:37.978 SGL Command Set: Supported 00:20:37.978 SGL Keyed: Supported 00:20:37.978 SGL Bit Bucket Descriptor: Not Supported 00:20:37.978 SGL Metadata Pointer: Not Supported 00:20:37.978 Oversized SGL: Not Supported 00:20:37.978 SGL Metadata Address: Not Supported 00:20:37.978 SGL Offset: Supported 00:20:37.978 Transport SGL Data Block: Not Supported 00:20:37.978 Replay Protected Memory Block: Not Supported 00:20:37.978 00:20:37.978 Firmware Slot Information 00:20:37.978 ========================= 00:20:37.978 Active slot: 0 00:20:37.978 00:20:37.978 00:20:37.978 Error Log 00:20:37.978 ========= 00:20:37.978 00:20:37.978 Active Namespaces 00:20:37.978 ================= 00:20:37.978 Discovery Log Page 00:20:37.978 ================== 00:20:37.978 Generation Counter: 2 00:20:37.978 Number of Records: 2 00:20:37.978 Record Format: 0 00:20:37.978 00:20:37.978 Discovery Log Entry 0 00:20:37.979 ---------------------- 00:20:37.979 Transport Type: 3 (TCP) 00:20:37.979 Address Family: 1 (IPv4) 00:20:37.979 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:37.979 Entry Flags: 00:20:37.979 Duplicate Returned Information: 1 00:20:37.979 Explicit Persistent Connection Support for Discovery: 1 00:20:37.979 Transport Requirements: 00:20:37.979 Secure Channel: Not Required 00:20:37.979 Port ID: 0 (0x0000) 00:20:37.979 Controller ID: 65535 (0xffff) 00:20:37.979 Admin Max SQ Size: 128 00:20:37.979 Transport Service Identifier: 4420 00:20:37.979 NVM Subsystem 
Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:37.979 Transport Address: 10.0.0.2 00:20:37.979 Discovery Log Entry 1 00:20:37.979 ---------------------- 00:20:37.979 Transport Type: 3 (TCP) 00:20:37.979 Address Family: 1 (IPv4) 00:20:37.979 Subsystem Type: 2 (NVM Subsystem) 00:20:37.979 Entry Flags: 00:20:37.979 Duplicate Returned Information: 0 00:20:37.979 Explicit Persistent Connection Support for Discovery: 0 00:20:37.979 Transport Requirements: 00:20:37.979 Secure Channel: Not Required 00:20:37.979 Port ID: 0 (0x0000) 00:20:37.979 Controller ID: 65535 (0xffff) 00:20:37.979 Admin Max SQ Size: 128 00:20:37.979 Transport Service Identifier: 4420 00:20:37.979 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:37.979 Transport Address: 10.0.0.2 [2024-07-13 03:09:44.335043] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:37.979 [2024-07-13 03:09:44.335067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:37.979 [2024-07-13 03:09:44.335081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.979 [2024-07-13 03:09:44.335091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:37.979 [2024-07-13 03:09:44.335101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.979 [2024-07-13 03:09:44.335109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:37.979 [2024-07-13 03:09:44.335118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.979 [2024-07-13 03:09:44.335126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.979 [2024-07-13 03:09:44.335138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.979 [2024-07-13 03:09:44.335157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.335170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.335178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.979 [2024-07-13 03:09:44.335193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.979 [2024-07-13 03:09:44.335228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.979 [2024-07-13 03:09:44.335294] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.979 [2024-07-13 03:09:44.335311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.979 [2024-07-13 03:09:44.335319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.335328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.979 [2024-07-13 03:09:44.335342] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.335355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
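The identify output above is the target as seen through its Discovery controller: generation counter 2 and two records, the discovery subsystem itself plus nqn.2016-06.io.spdk:cnode1, both served over TCP at 10.0.0.2:4420. The DEBUG traces around it show how that page is fetched on the wire: a GET LOG PAGE for log identifier 0x70 whose C2H data arrives in the datao/datal chunks logged at the top of this excerpt, followed by a short 8-byte re-read of the generation counter to confirm the log did not change mid-transfer. As a rough sketch (the function and struct names below come from SPDK's public host API headers, not from this log, and ctrlr is assumed to be an already-connected discovery controller), an application could issue the same Discovery log read like so:

/* Hedged sketch: read the NVMe-oF Discovery log page from an already-connected
 * discovery controller. Assumes SPDK's public host API (spdk/nvme.h) and the
 * spdk_nvmf_discovery_log_page layout from spdk/nvmf_spec.h; not taken from this log. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static void
get_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "Get Log Page failed\n");
    }
    *(bool *)ctx = true;
}

static int
read_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
    /* 1024-byte header plus room for three 1024-byte records; the log above
     * reports Number of Records: 2, so this is enough for the example. */
    static uint8_t buf[4096];
    struct spdk_nvmf_discovery_log_page *log = (void *)buf;
    bool done = false;
    int rc;

    /* Log identifier 0x70 (SPDK_NVME_LOG_DISCOVERY), nsid 0 as in the trace. */
    rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
                                          buf, sizeof(buf), 0,
                                          get_log_done, &done);
    if (rc != 0) {
        return rc;
    }
    while (!done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }

    for (uint32_t i = 0; i < log->numrec && i < 3; i++) {
        printf("record %u: subnqn=%.256s traddr=%.256s trsvcid=%.32s\n", i,
               (const char *)log->entries[i].subnqn,
               (const char *)log->entries[i].traddr,
               (const char *)log->entries[i].trsvcid);
    }
    return 0;
}

A production reader would re-check genctr after the read, which is what the short 8-byte GET LOG PAGE in the trace above is doing.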
00:20:37.979 [2024-07-13 03:09:44.335362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.979 [2024-07-13 03:09:44.335376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.979 [2024-07-13 03:09:44.335411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.979 [2024-07-13 03:09:44.335526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.979 [2024-07-13 03:09:44.335539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.979 [2024-07-13 03:09:44.335546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.335553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.979 [2024-07-13 03:09:44.335563] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:37.979 [2024-07-13 03:09:44.335576] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:37.979 [2024-07-13 03:09:44.335595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.335608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.335617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.979 [2024-07-13 03:09:44.335637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.979 [2024-07-13 03:09:44.335666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.979 [2024-07-13 03:09:44.335733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.979 [2024-07-13 03:09:44.335747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.979 [2024-07-13 03:09:44.335754] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.335762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.979 [2024-07-13 03:09:44.335786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.335797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.335804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.979 [2024-07-13 03:09:44.335818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.979 [2024-07-13 03:09:44.335864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.979 [2024-07-13 03:09:44.335954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.979 [2024-07-13 03:09:44.335972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.979 [2024-07-13 03:09:44.335979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.335986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.979 [2024-07-13 
03:09:44.336005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.336014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.336020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.979 [2024-07-13 03:09:44.336034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.979 [2024-07-13 03:09:44.336062] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.979 [2024-07-13 03:09:44.336132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.979 [2024-07-13 03:09:44.336144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.979 [2024-07-13 03:09:44.336150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.336157] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.979 [2024-07-13 03:09:44.336175] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.336184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.336190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.979 [2024-07-13 03:09:44.336207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.979 [2024-07-13 03:09:44.336234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.979 [2024-07-13 03:09:44.336304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.979 [2024-07-13 03:09:44.336330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.979 [2024-07-13 03:09:44.336338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.336349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.979 [2024-07-13 03:09:44.336370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.336379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.979 [2024-07-13 03:09:44.336386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.980 [2024-07-13 03:09:44.336399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.980 [2024-07-13 03:09:44.336434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.980 [2024-07-13 03:09:44.336532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.980 [2024-07-13 03:09:44.336554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.980 [2024-07-13 03:09:44.336563] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.336571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.980 [2024-07-13 03:09:44.336591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.336604] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.336612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.980 [2024-07-13 03:09:44.336626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.980 [2024-07-13 03:09:44.336656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.980 [2024-07-13 03:09:44.336776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.980 [2024-07-13 03:09:44.336789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.980 [2024-07-13 03:09:44.336796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.336803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.980 [2024-07-13 03:09:44.336825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.336835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.336842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.980 [2024-07-13 03:09:44.336868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.980 [2024-07-13 03:09:44.336963] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.980 [2024-07-13 03:09:44.337036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.980 [2024-07-13 03:09:44.337049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.980 [2024-07-13 03:09:44.337056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.980 [2024-07-13 03:09:44.337084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337101] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.980 [2024-07-13 03:09:44.337121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.980 [2024-07-13 03:09:44.337152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.980 [2024-07-13 03:09:44.337229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.980 [2024-07-13 03:09:44.337241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.980 [2024-07-13 03:09:44.337248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.980 [2024-07-13 03:09:44.337278] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.980 [2024-07-13 03:09:44.337327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.980 [2024-07-13 03:09:44.337369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.980 [2024-07-13 03:09:44.337429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.980 [2024-07-13 03:09:44.337441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.980 [2024-07-13 03:09:44.337452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.980 [2024-07-13 03:09:44.337492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.980 [2024-07-13 03:09:44.337534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.980 [2024-07-13 03:09:44.337561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.980 [2024-07-13 03:09:44.337628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.980 [2024-07-13 03:09:44.337647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.980 [2024-07-13 03:09:44.337655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.980 [2024-07-13 03:09:44.337682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.980 [2024-07-13 03:09:44.337725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.980 [2024-07-13 03:09:44.337757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.980 [2024-07-13 03:09:44.337824] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.980 [2024-07-13 03:09:44.337851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.980 [2024-07-13 03:09:44.337873] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.980 [2024-07-13 03:09:44.337898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.337907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:37.980 [2024-07-13 03:09:44.342056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:37.980 [2024-07-13 03:09:44.342096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.980 [2024-07-13 03:09:44.342132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:37.980 [2024-07-13 03:09:44.342202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:37.980 [2024-07-13 03:09:44.342216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:37.980 [2024-07-13 03:09:44.342222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:37.981 [2024-07-13 03:09:44.342230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:37.981 [2024-07-13 03:09:44.342245] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:37.981 00:20:37.981 03:09:44 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:37.981 [2024-07-13 03:09:44.452408] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:37.981 [2024-07-13 03:09:44.452543] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79913 ] 00:20:38.242 [2024-07-13 03:09:44.620869] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:38.242 [2024-07-13 03:09:44.625062] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:38.242 [2024-07-13 03:09:44.625085] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:38.242 [2024-07-13 03:09:44.625127] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:38.242 [2024-07-13 03:09:44.625145] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:38.242 [2024-07-13 03:09:44.625323] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:38.242 [2024-07-13 03:09:44.625408] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:38.242 [2024-07-13 03:09:44.635909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:38.242 [2024-07-13 03:09:44.635948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:38.242 [2024-07-13 03:09:44.635966] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:38.242 [2024-07-13 03:09:44.635976] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:38.242 [2024-07-13 03:09:44.636070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.242 [2024-07-13 03:09:44.636102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.636111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:38.243 [2024-07-13 03:09:44.636135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:38.243 [2024-07-13 03:09:44.636178] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:38.243 [2024-07-13 03:09:44.645986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.243 [2024-07-13 03:09:44.646020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.243 [2024-07-13 03:09:44.646028] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.646038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:38.243 [2024-07-13 03:09:44.646061] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:38.243 [2024-07-13 03:09:44.646086] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:38.243 [2024-07-13 03:09:44.646098] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:38.243 [2024-07-13 03:09:44.646117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.646142] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.646167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:38.243 [2024-07-13 03:09:44.646190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.243 [2024-07-13 03:09:44.646230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:38.243 [2024-07-13 03:09:44.646368] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.243 [2024-07-13 03:09:44.646391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.243 [2024-07-13 03:09:44.646404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.646413] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:38.243 [2024-07-13 03:09:44.646426] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:38.243 [2024-07-13 03:09:44.646442] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:38.243 [2024-07-13 03:09:44.646455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.646464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.646471] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:38.243 [2024-07-13 03:09:44.646489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.243 [2024-07-13 03:09:44.646520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:38.243 [2024-07-13 03:09:44.646624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.243 [2024-07-13 03:09:44.646638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.243 [2024-07-13 03:09:44.646645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.646652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:38.243 [2024-07-13 03:09:44.646665] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:38.243 [2024-07-13 03:09:44.646683] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:38.243 [2024-07-13 03:09:44.646697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.646705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.646712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:38.243 [2024-07-13 03:09:44.646726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.243 [2024-07-13 03:09:44.646753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:38.243 [2024-07-13 03:09:44.646864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.243 [2024-07-13 03:09:44.646877] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.243 [2024-07-13 03:09:44.646896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.646905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:38.243 [2024-07-13 03:09:44.646916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:38.243 [2024-07-13 03:09:44.646935] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.646949] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.646957] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:38.243 [2024-07-13 03:09:44.646976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.243 [2024-07-13 03:09:44.647004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:38.243 [2024-07-13 03:09:44.647124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.243 [2024-07-13 03:09:44.647152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.243 [2024-07-13 03:09:44.647161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.647168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:38.243 [2024-07-13 03:09:44.647178] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:38.243 [2024-07-13 03:09:44.647187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:38.243 [2024-07-13 03:09:44.647212] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:38.243 [2024-07-13 03:09:44.647322] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:38.243 
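The trace sequence above is the host-side controller initialization state machine for nqn.2016-06.io.spdk:cnode1: connect the admin queue, read VS and CAP through Fabrics PROPERTY GET, check CC.EN, disable and wait for CSTS.RDY = 0, then set CC.EN = 1 (the "Setting CC.EN = 1" entry) and wait for CSTS.RDY = 1 before moving on to IDENTIFY. All of that is driven internally by the SPDK host library; a minimal sketch of the application code that triggers it, assuming SPDK's public host API and the same transport ID string that spdk_nvme_identify was passed above, looks like this:

/* Hedged sketch: connect to the NVMe-oF/TCP subsystem exercised above.
 * Assumes SPDK's public host API (spdk/env.h, spdk/nvme.h); not taken from this log. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&opts);
    opts.name = "identify_sketch";
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* Same transport ID string that spdk_nvme_identify was given above. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* spdk_nvme_connect() runs the init state machine seen in these traces:
     * Fabrics CONNECT, PROPERTY GET/SET for VS/CAP/CC/CSTS, then IDENTIFY. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Connected to %s, firmware %.8s\n", trid.subnqn, (const char *)cdata->fr);

    spdk_nvme_detach(ctrlr);
    return 0;
}

spdk_nvme_connect() does not return until the state machine reaches the ready state, so every PROPERTY GET/SET and IDENTIFY shown in these traces happens inside that single call.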
[2024-07-13 03:09:44.647330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:38.243 [2024-07-13 03:09:44.647346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.647359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.647367] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:38.243 [2024-07-13 03:09:44.647384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.243 [2024-07-13 03:09:44.647416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:38.243 [2024-07-13 03:09:44.647530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.243 [2024-07-13 03:09:44.647542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.243 [2024-07-13 03:09:44.647548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.647555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:38.243 [2024-07-13 03:09:44.647582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:38.243 [2024-07-13 03:09:44.647600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.647609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.647617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:38.243 [2024-07-13 03:09:44.647631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.243 [2024-07-13 03:09:44.647657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:38.243 [2024-07-13 03:09:44.647762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.243 [2024-07-13 03:09:44.647807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.243 [2024-07-13 03:09:44.647816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.647823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:38.243 [2024-07-13 03:09:44.647836] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:38.243 [2024-07-13 03:09:44.647847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:38.243 [2024-07-13 03:09:44.647862] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:38.243 [2024-07-13 03:09:44.647897] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:38.243 [2024-07-13 03:09:44.647920] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.647930] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:38.243 [2024-07-13 03:09:44.647946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.243 [2024-07-13 03:09:44.647994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:38.243 [2024-07-13 03:09:44.648216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.243 [2024-07-13 03:09:44.648239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.243 [2024-07-13 03:09:44.648247] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.648271] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:38.243 [2024-07-13 03:09:44.648281] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:38.243 [2024-07-13 03:09:44.648289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.648307] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.648317] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.648334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.243 [2024-07-13 03:09:44.648345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.243 [2024-07-13 03:09:44.648351] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.243 [2024-07-13 03:09:44.648358] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:38.243 [2024-07-13 03:09:44.648378] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:38.243 [2024-07-13 03:09:44.648392] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:38.243 [2024-07-13 03:09:44.648401] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:38.243 [2024-07-13 03:09:44.648409] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:38.243 [2024-07-13 03:09:44.648418] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:38.243 [2024-07-13 03:09:44.648427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:38.243 [2024-07-13 03:09:44.648453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:38.244 [2024-07-13 03:09:44.648484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.648493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.648501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:38.244 [2024-07-13 03:09:44.648521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.244 [2024-07-13 
03:09:44.648553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:38.244 [2024-07-13 03:09:44.648662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.244 [2024-07-13 03:09:44.648680] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.244 [2024-07-13 03:09:44.648688] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.648698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:38.244 [2024-07-13 03:09:44.648712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.648721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.648728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:38.244 [2024-07-13 03:09:44.648755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.244 [2024-07-13 03:09:44.648767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.648774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.648780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:38.244 [2024-07-13 03:09:44.648790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.244 [2024-07-13 03:09:44.648805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.648812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.648819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:38.244 [2024-07-13 03:09:44.648829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.244 [2024-07-13 03:09:44.648839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.648846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.648852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.244 [2024-07-13 03:09:44.648862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.244 [2024-07-13 03:09:44.648870] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:38.244 [2024-07-13 03:09:44.648920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:38.244 [2024-07-13 03:09:44.648962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.648972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:38.244 [2024-07-13 03:09:44.648994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.244 [2024-07-13 
03:09:44.649032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:38.244 [2024-07-13 03:09:44.649044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:38.244 [2024-07-13 03:09:44.649053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:38.244 [2024-07-13 03:09:44.649061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.244 [2024-07-13 03:09:44.649068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:38.244 [2024-07-13 03:09:44.649267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.244 [2024-07-13 03:09:44.649310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.244 [2024-07-13 03:09:44.649333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.649340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:38.244 [2024-07-13 03:09:44.649351] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:38.244 [2024-07-13 03:09:44.649361] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:38.244 [2024-07-13 03:09:44.649376] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:38.244 [2024-07-13 03:09:44.649387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:38.244 [2024-07-13 03:09:44.649398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.649406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.649413] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:38.244 [2024-07-13 03:09:44.649432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.244 [2024-07-13 03:09:44.649461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:38.244 [2024-07-13 03:09:44.649560] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.244 [2024-07-13 03:09:44.649572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.244 [2024-07-13 03:09:44.649578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.649585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:38.244 [2024-07-13 03:09:44.649674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:38.244 [2024-07-13 03:09:44.649705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:38.244 [2024-07-13 03:09:44.649724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.649733] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:38.244 [2024-07-13 03:09:44.649747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.244 [2024-07-13 03:09:44.649775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:38.244 [2024-07-13 03:09:44.653976] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.244 [2024-07-13 03:09:44.654008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.244 [2024-07-13 03:09:44.654017] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654024] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:38.244 [2024-07-13 03:09:44.654033] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:38.244 [2024-07-13 03:09:44.654040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654059] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654067] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.244 [2024-07-13 03:09:44.654094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.244 [2024-07-13 03:09:44.654101] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:38.244 [2024-07-13 03:09:44.654164] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:38.244 [2024-07-13 03:09:44.654186] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:38.244 [2024-07-13 03:09:44.654213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:38.244 [2024-07-13 03:09:44.654233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:38.244 [2024-07-13 03:09:44.654270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.244 [2024-07-13 03:09:44.654323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:38.244 [2024-07-13 03:09:44.654476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.244 [2024-07-13 03:09:44.654498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.244 [2024-07-13 03:09:44.654506] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654512] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:38.244 [2024-07-13 03:09:44.654520] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:38.244 [2024-07-13 03:09:44.654527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654538] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654549] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.244 [2024-07-13 03:09:44.654572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.244 [2024-07-13 03:09:44.654578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:38.244 [2024-07-13 03:09:44.654620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:38.244 [2024-07-13 03:09:44.654644] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:38.244 [2024-07-13 03:09:44.654663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654671] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:38.244 [2024-07-13 03:09:44.654686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.244 [2024-07-13 03:09:44.654716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:38.244 [2024-07-13 03:09:44.654921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.244 [2024-07-13 03:09:44.654944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.244 [2024-07-13 03:09:44.654952] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654958] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:38.244 [2024-07-13 03:09:44.654966] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:38.244 [2024-07-13 03:09:44.654973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.654995] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.655006] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.244 [2024-07-13 03:09:44.655019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.244 [2024-07-13 03:09:44.655029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.245 [2024-07-13 03:09:44.655035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.655041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:38.245 [2024-07-13 03:09:44.655074] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:38.245 [2024-07-13 03:09:44.655105] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:38.245 [2024-07-13 03:09:44.655123] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:38.245 [2024-07-13 03:09:44.655133] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:38.245 [2024-07-13 03:09:44.655146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:38.245 [2024-07-13 03:09:44.655155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:38.245 [2024-07-13 03:09:44.655163] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:38.245 [2024-07-13 03:09:44.655174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:38.245 [2024-07-13 03:09:44.655184] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:38.245 [2024-07-13 03:09:44.655222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.655233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:38.245 [2024-07-13 03:09:44.655248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.245 [2024-07-13 03:09:44.655261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.655269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.655276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:38.245 [2024-07-13 03:09:44.655287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:38.245 [2024-07-13 03:09:44.655329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:38.245 [2024-07-13 03:09:44.655342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:38.245 [2024-07-13 03:09:44.655474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.245 [2024-07-13 03:09:44.655497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.245 [2024-07-13 03:09:44.655505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.655513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:38.245 [2024-07-13 03:09:44.655525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.245 [2024-07-13 03:09:44.655535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.245 [2024-07-13 03:09:44.655540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.655550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 
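By this point the controller has reached the ready state ("setting state to ready (no timeout)" above), and the remaining traces are the post-init admin queries: keep alive plus GET FEATURES for arbitration, power management, temperature threshold and number of queues, followed by the log-page reads. Once connect returns, an application can walk the namespaces that the earlier IDENTIFY passes reported ("Namespace 1 was added"); a small hedged sketch using SPDK's synchronous namespace accessors, again assuming the public host API rather than anything taken from this log:

/* Hedged sketch: enumerate active namespaces on an already-connected ctrlr.
 * Assumes SPDK's public host API; names are not taken from this log. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void
print_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        if (ns == NULL) {
            continue;
        }
        /* Sizes come from the IDENTIFY namespace data fetched during init. */
        printf("nsid %" PRIu32 ": %" PRIu64 " bytes, %" PRIu32 "-byte sectors\n",
               nsid, spdk_nvme_ns_get_size(ns), spdk_nvme_ns_get_sector_size(ns));
    }
}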
00:20:38.245 [2024-07-13 03:09:44.655569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.655577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:38.245 [2024-07-13 03:09:44.655590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.245 [2024-07-13 03:09:44.655619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:38.245 [2024-07-13 03:09:44.655722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.245 [2024-07-13 03:09:44.655734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.245 [2024-07-13 03:09:44.655740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.655747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:38.245 [2024-07-13 03:09:44.655766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.655775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:38.245 [2024-07-13 03:09:44.655787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.245 [2024-07-13 03:09:44.655812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:38.245 [2024-07-13 03:09:44.655926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.245 [2024-07-13 03:09:44.655939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.245 [2024-07-13 03:09:44.655945] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.655952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:38.245 [2024-07-13 03:09:44.655968] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.655976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:38.245 [2024-07-13 03:09:44.655995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.245 [2024-07-13 03:09:44.656026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:38.245 [2024-07-13 03:09:44.656126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.245 [2024-07-13 03:09:44.656144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.245 [2024-07-13 03:09:44.656151] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656158] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:38.245 [2024-07-13 03:09:44.656190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:38.245 [2024-07-13 03:09:44.656215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff 
cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.245 [2024-07-13 03:09:44.656229] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:38.245 [2024-07-13 03:09:44.656252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.245 [2024-07-13 03:09:44.656268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:20:38.245 [2024-07-13 03:09:44.656288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.245 [2024-07-13 03:09:44.656306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:38.245 [2024-07-13 03:09:44.656327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.245 [2024-07-13 03:09:44.656356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:38.245 [2024-07-13 03:09:44.656368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:38.245 [2024-07-13 03:09:44.656376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:20:38.245 [2024-07-13 03:09:44.656383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:38.245 [2024-07-13 03:09:44.656664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.245 [2024-07-13 03:09:44.656687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.245 [2024-07-13 03:09:44.656695] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656703] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:20:38.245 [2024-07-13 03:09:44.656716] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:20:38.245 [2024-07-13 03:09:44.656724] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656755] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656765] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.245 [2024-07-13 03:09:44.656791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.245 [2024-07-13 03:09:44.656797] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656804] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:20:38.245 [2024-07-13 03:09:44.656811] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:38.245 [2024-07-13 03:09:44.656818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656828] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656835] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.245 [2024-07-13 03:09:44.656856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.245 [2024-07-13 03:09:44.656862] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656868] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:20:38.245 [2024-07-13 03:09:44.656876] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:38.245 [2024-07-13 03:09:44.656895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656927] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656962] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:38.245 [2024-07-13 03:09:44.656981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:38.245 [2024-07-13 03:09:44.656987] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.656997] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:20:38.245 [2024-07-13 03:09:44.657006] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:38.245 [2024-07-13 03:09:44.657013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.657024] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.657042] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.657051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.245 [2024-07-13 03:09:44.657060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.245 [2024-07-13 03:09:44.657066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.245 [2024-07-13 03:09:44.657074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:38.245 [2024-07-13 03:09:44.657102] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.245 [2024-07-13 03:09:44.657118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.245 [2024-07-13 03:09:44.657124] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.246 [2024-07-13 03:09:44.657131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:38.246 [2024-07-13 03:09:44.657148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.246 [2024-07-13 03:09:44.657158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:20:38.246 [2024-07-13 03:09:44.657164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.246 [2024-07-13 03:09:44.657171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:20:38.246 [2024-07-13 03:09:44.657184] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.246 [2024-07-13 03:09:44.657194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.246 ===================================================== 00:20:38.246 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.246 ===================================================== 00:20:38.246 Controller Capabilities/Features 00:20:38.246 ================================ 00:20:38.246 Vendor ID: 8086 00:20:38.246 Subsystem Vendor ID: 8086 00:20:38.246 Serial Number: SPDK00000000000001 00:20:38.246 Model Number: SPDK bdev Controller 00:20:38.246 Firmware Version: 24.09 00:20:38.246 Recommended Arb Burst: 6 00:20:38.246 IEEE OUI Identifier: e4 d2 5c 00:20:38.246 Multi-path I/O 00:20:38.246 May have multiple subsystem ports: Yes 00:20:38.246 May have multiple controllers: Yes 00:20:38.246 Associated with SR-IOV VF: No 00:20:38.246 Max Data Transfer Size: 131072 00:20:38.246 Max Number of Namespaces: 32 00:20:38.246 Max Number of I/O Queues: 127 00:20:38.246 NVMe Specification Version (VS): 1.3 00:20:38.246 NVMe Specification Version (Identify): 1.3 00:20:38.246 Maximum Queue Entries: 128 00:20:38.246 Contiguous Queues Required: Yes 00:20:38.246 Arbitration Mechanisms Supported 00:20:38.246 Weighted Round Robin: Not Supported 00:20:38.246 Vendor Specific: Not Supported 00:20:38.246 Reset Timeout: 15000 ms 00:20:38.246 Doorbell Stride: 4 bytes 00:20:38.246 NVM Subsystem Reset: Not Supported 00:20:38.246 Command Sets Supported 00:20:38.246 NVM Command Set: Supported 00:20:38.246 Boot Partition: Not Supported 00:20:38.246 Memory Page Size Minimum: 4096 bytes 00:20:38.246 Memory Page Size Maximum: 4096 bytes 00:20:38.246 Persistent Memory Region: Not Supported 00:20:38.246 Optional Asynchronous Events Supported 00:20:38.246 Namespace Attribute Notices: Supported 00:20:38.246 Firmware Activation Notices: Not Supported 00:20:38.246 ANA Change Notices: Not Supported 00:20:38.246 PLE Aggregate Log Change Notices: Not Supported 00:20:38.246 LBA Status Info Alert Notices: Not Supported 00:20:38.246 EGE Aggregate Log Change Notices: Not Supported 00:20:38.246 Normal NVM Subsystem Shutdown event: Not Supported 00:20:38.246 Zone Descriptor Change Notices: Not Supported 00:20:38.246 Discovery Log Change Notices: Not Supported 00:20:38.246 Controller Attributes 00:20:38.246 128-bit Host Identifier: Supported 00:20:38.246 Non-Operational Permissive Mode: Not Supported 00:20:38.246 NVM Sets: Not Supported 00:20:38.246 Read Recovery Levels: Not Supported 00:20:38.246 Endurance Groups: Not Supported 00:20:38.246 Predictable Latency Mode: Not Supported 00:20:38.246 Traffic Based Keep ALive: Not Supported 00:20:38.246 Namespace Granularity: Not Supported 00:20:38.246 SQ Associations: Not Supported 00:20:38.246 UUID List: Not Supported 00:20:38.246 Multi-Domain Subsystem: Not Supported 00:20:38.246 Fixed Capacity Management: Not Supported 00:20:38.246 Variable Capacity Management: Not Supported 00:20:38.246 Delete Endurance Group: Not Supported 00:20:38.246 Delete NVM Set: Not Supported 00:20:38.246 Extended LBA Formats Supported: Not Supported 00:20:38.246 Flexible Data Placement Supported: Not 
Supported 00:20:38.246 00:20:38.246 Controller Memory Buffer Support 00:20:38.246 ================================ 00:20:38.246 Supported: No 00:20:38.246 00:20:38.246 Persistent Memory Region Support 00:20:38.246 ================================ 00:20:38.246 Supported: No 00:20:38.246 00:20:38.246 Admin Command Set Attributes 00:20:38.246 ============================ 00:20:38.246 Security Send/Receive: Not Supported 00:20:38.246 Format NVM: Not Supported 00:20:38.246 Firmware Activate/Download: Not Supported 00:20:38.246 Namespace Management: Not Supported 00:20:38.246 Device Self-Test: Not Supported 00:20:38.246 Directives: Not Supported 00:20:38.246 NVMe-MI: Not Supported 00:20:38.246 Virtualization Management: Not Supported 00:20:38.246 Doorbell Buffer Config: Not Supported 00:20:38.246 Get LBA Status Capability: Not Supported 00:20:38.246 Command & Feature Lockdown Capability: Not Supported 00:20:38.246 Abort Command Limit: 4 00:20:38.246 Async Event Request Limit: 4 00:20:38.246 Number of Firmware Slots: N/A 00:20:38.246 Firmware Slot 1 Read-Only: N/A 00:20:38.246 Firmware Activation Without Reset: [2024-07-13 03:09:44.657200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.246 [2024-07-13 03:09:44.657207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:38.246 N/A 00:20:38.246 Multiple Update Detection Support: N/A 00:20:38.246 Firmware Update Granularity: No Information Provided 00:20:38.246 Per-Namespace SMART Log: No 00:20:38.246 Asymmetric Namespace Access Log Page: Not Supported 00:20:38.246 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:38.246 Command Effects Log Page: Supported 00:20:38.246 Get Log Page Extended Data: Supported 00:20:38.246 Telemetry Log Pages: Not Supported 00:20:38.246 Persistent Event Log Pages: Not Supported 00:20:38.246 Supported Log Pages Log Page: May Support 00:20:38.246 Commands Supported & Effects Log Page: Not Supported 00:20:38.246 Feature Identifiers & Effects Log Page:May Support 00:20:38.246 NVMe-MI Commands & Effects Log Page: May Support 00:20:38.246 Data Area 4 for Telemetry Log: Not Supported 00:20:38.246 Error Log Page Entries Supported: 128 00:20:38.246 Keep Alive: Supported 00:20:38.246 Keep Alive Granularity: 10000 ms 00:20:38.246 00:20:38.246 NVM Command Set Attributes 00:20:38.246 ========================== 00:20:38.246 Submission Queue Entry Size 00:20:38.246 Max: 64 00:20:38.246 Min: 64 00:20:38.246 Completion Queue Entry Size 00:20:38.246 Max: 16 00:20:38.246 Min: 16 00:20:38.246 Number of Namespaces: 32 00:20:38.246 Compare Command: Supported 00:20:38.246 Write Uncorrectable Command: Not Supported 00:20:38.246 Dataset Management Command: Supported 00:20:38.246 Write Zeroes Command: Supported 00:20:38.246 Set Features Save Field: Not Supported 00:20:38.246 Reservations: Supported 00:20:38.246 Timestamp: Not Supported 00:20:38.246 Copy: Supported 00:20:38.246 Volatile Write Cache: Present 00:20:38.246 Atomic Write Unit (Normal): 1 00:20:38.246 Atomic Write Unit (PFail): 1 00:20:38.246 Atomic Compare & Write Unit: 1 00:20:38.246 Fused Compare & Write: Supported 00:20:38.246 Scatter-Gather List 00:20:38.246 SGL Command Set: Supported 00:20:38.246 SGL Keyed: Supported 00:20:38.246 SGL Bit Bucket Descriptor: Not Supported 00:20:38.246 SGL Metadata Pointer: Not Supported 00:20:38.246 Oversized SGL: Not Supported 00:20:38.246 SGL Metadata Address: Not Supported 00:20:38.246 SGL Offset: Supported 00:20:38.246 Transport SGL Data Block: Not Supported 
00:20:38.246 Replay Protected Memory Block: Not Supported 00:20:38.246 00:20:38.246 Firmware Slot Information 00:20:38.246 ========================= 00:20:38.246 Active slot: 1 00:20:38.246 Slot 1 Firmware Revision: 24.09 00:20:38.246 00:20:38.246 00:20:38.246 Commands Supported and Effects 00:20:38.246 ============================== 00:20:38.246 Admin Commands 00:20:38.246 -------------- 00:20:38.246 Get Log Page (02h): Supported 00:20:38.246 Identify (06h): Supported 00:20:38.246 Abort (08h): Supported 00:20:38.246 Set Features (09h): Supported 00:20:38.246 Get Features (0Ah): Supported 00:20:38.246 Asynchronous Event Request (0Ch): Supported 00:20:38.246 Keep Alive (18h): Supported 00:20:38.246 I/O Commands 00:20:38.246 ------------ 00:20:38.246 Flush (00h): Supported LBA-Change 00:20:38.246 Write (01h): Supported LBA-Change 00:20:38.246 Read (02h): Supported 00:20:38.246 Compare (05h): Supported 00:20:38.246 Write Zeroes (08h): Supported LBA-Change 00:20:38.246 Dataset Management (09h): Supported LBA-Change 00:20:38.246 Copy (19h): Supported LBA-Change 00:20:38.246 00:20:38.246 Error Log 00:20:38.246 ========= 00:20:38.246 00:20:38.246 Arbitration 00:20:38.246 =========== 00:20:38.246 Arbitration Burst: 1 00:20:38.246 00:20:38.246 Power Management 00:20:38.246 ================ 00:20:38.246 Number of Power States: 1 00:20:38.246 Current Power State: Power State #0 00:20:38.246 Power State #0: 00:20:38.246 Max Power: 0.00 W 00:20:38.246 Non-Operational State: Operational 00:20:38.246 Entry Latency: Not Reported 00:20:38.246 Exit Latency: Not Reported 00:20:38.246 Relative Read Throughput: 0 00:20:38.246 Relative Read Latency: 0 00:20:38.246 Relative Write Throughput: 0 00:20:38.246 Relative Write Latency: 0 00:20:38.246 Idle Power: Not Reported 00:20:38.246 Active Power: Not Reported 00:20:38.246 Non-Operational Permissive Mode: Not Supported 00:20:38.246 00:20:38.246 Health Information 00:20:38.246 ================== 00:20:38.247 Critical Warnings: 00:20:38.247 Available Spare Space: OK 00:20:38.247 Temperature: OK 00:20:38.247 Device Reliability: OK 00:20:38.247 Read Only: No 00:20:38.247 Volatile Memory Backup: OK 00:20:38.247 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:38.247 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:38.247 Available Spare: 0% 00:20:38.247 Available Spare Threshold: 0% 00:20:38.247 Life Percentage Used:[2024-07-13 03:09:44.657415] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.657428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:38.247 [2024-07-13 03:09:44.657442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.247 [2024-07-13 03:09:44.657478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:38.247 [2024-07-13 03:09:44.657592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.247 [2024-07-13 03:09:44.657614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.247 [2024-07-13 03:09:44.657623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.657630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:38.247 [2024-07-13 03:09:44.657704] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare 
to destruct SSD 00:20:38.247 [2024-07-13 03:09:44.657727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:38.247 [2024-07-13 03:09:44.657741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.247 [2024-07-13 03:09:44.657750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:38.247 [2024-07-13 03:09:44.657759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.247 [2024-07-13 03:09:44.657767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:38.247 [2024-07-13 03:09:44.657787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.247 [2024-07-13 03:09:44.657795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.247 [2024-07-13 03:09:44.657804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.247 [2024-07-13 03:09:44.657818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.657827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.657834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.247 [2024-07-13 03:09:44.657848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.247 [2024-07-13 03:09:44.661965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.247 [2024-07-13 03:09:44.662012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.247 [2024-07-13 03:09:44.662027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.247 [2024-07-13 03:09:44.662035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.247 [2024-07-13 03:09:44.662060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.247 [2024-07-13 03:09:44.662092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.247 [2024-07-13 03:09:44.662139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.247 [2024-07-13 03:09:44.662308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.247 [2024-07-13 03:09:44.662321] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.247 [2024-07-13 03:09:44.662327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662334] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on 
tqpair=0x61500000f080 00:20:38.247 [2024-07-13 03:09:44.662343] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:38.247 [2024-07-13 03:09:44.662352] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:38.247 [2024-07-13 03:09:44.662370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.247 [2024-07-13 03:09:44.662399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.247 [2024-07-13 03:09:44.662426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.247 [2024-07-13 03:09:44.662530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.247 [2024-07-13 03:09:44.662552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.247 [2024-07-13 03:09:44.662560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.247 [2024-07-13 03:09:44.662590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.247 [2024-07-13 03:09:44.662619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.247 [2024-07-13 03:09:44.662646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.247 [2024-07-13 03:09:44.662743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.247 [2024-07-13 03:09:44.662755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.247 [2024-07-13 03:09:44.662762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.247 [2024-07-13 03:09:44.662786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.247 [2024-07-13 03:09:44.662812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.247 [2024-07-13 03:09:44.662837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.247 [2024-07-13 03:09:44.662956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.247 [2024-07-13 03:09:44.662969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.247 [2024-07-13 03:09:44.662975] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.662985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.247 [2024-07-13 03:09:44.663003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.663011] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.247 [2024-07-13 03:09:44.663017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.247 [2024-07-13 03:09:44.663033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.247 [2024-07-13 03:09:44.663061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.247 [2024-07-13 03:09:44.663156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.663177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.663184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.663191] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.663209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.663217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.663223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.248 [2024-07-13 03:09:44.663236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.248 [2024-07-13 03:09:44.663270] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.248 [2024-07-13 03:09:44.663364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.663376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.663386] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.663393] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.663410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.663418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.663424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.248 [2024-07-13 03:09:44.663436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.248 [2024-07-13 03:09:44.663461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.248 [2024-07-13 03:09:44.663564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.663581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.663588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.663595] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.663612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.663620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.663626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.248 [2024-07-13 03:09:44.663642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.248 [2024-07-13 03:09:44.663669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.248 [2024-07-13 03:09:44.663760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.663780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.663788] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.663795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.663812] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.663820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.663826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.248 [2024-07-13 03:09:44.663838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.248 [2024-07-13 03:09:44.663863] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.248 [2024-07-13 03:09:44.663999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.664016] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.664024] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.664031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.664049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.664061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.664068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.248 [2024-07-13 03:09:44.664081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.248 [2024-07-13 03:09:44.664109] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.248 [2024-07-13 03:09:44.664194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.664215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.664223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.664230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.664263] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.664271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.664277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.248 [2024-07-13 03:09:44.664289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.248 [2024-07-13 03:09:44.664314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.248 [2024-07-13 03:09:44.664413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.664430] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.664437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.664444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.664461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.664469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.664476] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.248 [2024-07-13 03:09:44.664491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.248 [2024-07-13 03:09:44.664518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.248 [2024-07-13 03:09:44.664615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.664631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.664638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.664645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.664661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.664669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.664675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.248 [2024-07-13 03:09:44.664695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.248 [2024-07-13 03:09:44.664721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.248 [2024-07-13 03:09:44.664974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.665009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.665017] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.665025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.665045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.665054] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:20:38.248 [2024-07-13 03:09:44.665061] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.248 [2024-07-13 03:09:44.665074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.248 [2024-07-13 03:09:44.665113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.248 [2024-07-13 03:09:44.665227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.665248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.665256] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.665263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.665282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.665290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.665297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.248 [2024-07-13 03:09:44.665311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.248 [2024-07-13 03:09:44.665338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.248 [2024-07-13 03:09:44.665471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.665488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.665495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.665503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.665527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.665536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.665543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.248 [2024-07-13 03:09:44.665556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.248 [2024-07-13 03:09:44.665597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.248 [2024-07-13 03:09:44.665700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.665716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.665723] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.665730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.665747] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.665755] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.665761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 
00:20:38.248 [2024-07-13 03:09:44.665774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.248 [2024-07-13 03:09:44.665799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.248 [2024-07-13 03:09:44.669955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.248 [2024-07-13 03:09:44.669985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.248 [2024-07-13 03:09:44.669994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.670001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.248 [2024-07-13 03:09:44.670029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:38.248 [2024-07-13 03:09:44.670039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:38.249 [2024-07-13 03:09:44.670045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:38.249 [2024-07-13 03:09:44.670060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:38.249 [2024-07-13 03:09:44.670095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:38.249 [2024-07-13 03:09:44.670198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:38.249 [2024-07-13 03:09:44.670215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:38.249 [2024-07-13 03:09:44.670222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:38.249 [2024-07-13 03:09:44.670229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:38.249 [2024-07-13 03:09:44.670243] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:38.249 0% 00:20:38.249 Data Units Read: 0 00:20:38.249 Data Units Written: 0 00:20:38.249 Host Read Commands: 0 00:20:38.249 Host Write Commands: 0 00:20:38.249 Controller Busy Time: 0 minutes 00:20:38.249 Power Cycles: 0 00:20:38.249 Power On Hours: 0 hours 00:20:38.249 Unsafe Shutdowns: 0 00:20:38.249 Unrecoverable Media Errors: 0 00:20:38.249 Lifetime Error Log Entries: 0 00:20:38.249 Warning Temperature Time: 0 minutes 00:20:38.249 Critical Temperature Time: 0 minutes 00:20:38.249 00:20:38.249 Number of Queues 00:20:38.249 ================ 00:20:38.249 Number of I/O Submission Queues: 127 00:20:38.249 Number of I/O Completion Queues: 127 00:20:38.249 00:20:38.249 Active Namespaces 00:20:38.249 ================= 00:20:38.249 Namespace ID:1 00:20:38.249 Error Recovery Timeout: Unlimited 00:20:38.249 Command Set Identifier: NVM (00h) 00:20:38.249 Deallocate: Supported 00:20:38.249 Deallocated/Unwritten Error: Not Supported 00:20:38.249 Deallocated Read Value: Unknown 00:20:38.249 Deallocate in Write Zeroes: Not Supported 00:20:38.249 Deallocated Guard Field: 0xFFFF 00:20:38.249 Flush: Supported 00:20:38.249 Reservation: Supported 00:20:38.249 Namespace Sharing Capabilities: Multiple Controllers 00:20:38.249 Size (in LBAs): 131072 (0GiB) 00:20:38.249 Capacity (in LBAs): 131072 (0GiB) 00:20:38.249 Utilization (in LBAs): 131072 (0GiB) 00:20:38.249 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:38.249 EUI64: ABCDEF0123456789 00:20:38.249 UUID: 
ddcdb7a3-68fb-4473-ba32-23a9e16c7f12 00:20:38.249 Thin Provisioning: Not Supported 00:20:38.249 Per-NS Atomic Units: Yes 00:20:38.249 Atomic Boundary Size (Normal): 0 00:20:38.249 Atomic Boundary Size (PFail): 0 00:20:38.249 Atomic Boundary Offset: 0 00:20:38.249 Maximum Single Source Range Length: 65535 00:20:38.249 Maximum Copy Length: 65535 00:20:38.249 Maximum Source Range Count: 1 00:20:38.249 NGUID/EUI64 Never Reused: No 00:20:38.249 Namespace Write Protected: No 00:20:38.249 Number of LBA Formats: 1 00:20:38.249 Current LBA Format: LBA Format #00 00:20:38.249 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:38.249 00:20:38.249 03:09:44 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:38.508 rmmod nvme_tcp 00:20:38.508 rmmod nvme_fabrics 00:20:38.508 rmmod nvme_keyring 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 79875 ']' 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 79875 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 79875 ']' 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 79875 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79875 00:20:38.508 killing process with pid 79875 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79875' 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 79875 00:20:38.508 03:09:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 79875 00:20:39.916 03:09:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:39.916 03:09:46 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:39.916 03:09:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:39.916 03:09:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.916 03:09:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.916 03:09:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.916 03:09:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.916 03:09:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.916 03:09:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:39.916 ************************************ 00:20:39.916 END TEST nvmf_identify 00:20:39.916 ************************************ 00:20:39.916 00:20:39.916 real 0m3.624s 00:20:39.916 user 0m9.757s 00:20:39.916 sys 0m0.773s 00:20:39.916 03:09:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:39.916 03:09:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.916 03:09:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:39.916 03:09:46 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:39.916 03:09:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:39.916 03:09:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.916 03:09:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:39.916 ************************************ 00:20:39.916 START TEST nvmf_perf 00:20:39.916 ************************************ 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:39.916 * Looking for test storage... 
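[editor's note] That closes out the nvmf_identify run: the controller/namespace dump above appears to come from the SPDK identify example pointed at the TCP listener, after which host/identify.sh and nvmftestfini tear the setup down. Condensed into a plain shell sketch of what the trace shows (paths are simplified, the 79875 PID and interface names are the values from this particular run, and rpc_cmd in the trace stands in for scripts/rpc.py):
  ./build/examples/identify -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
  modprobe -v -r nvme-tcp            # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged above
  kill 79875                         # nvmf_tgt pid for this run; the harness then waits for it to exit
  ip netns delete nvmf_tgt_ns_spdk   # approximate effect of _remove_spdk_ns (assumption; the helper may do more)
  ip -4 addr flush nvmf_init_if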
00:20:39.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.916 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:39.917 Cannot find device "nvmf_tgt_br" 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.917 Cannot find device "nvmf_tgt_br2" 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:39.917 Cannot find device "nvmf_tgt_br" 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:39.917 Cannot find device "nvmf_tgt_br2" 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:39.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:39.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:39.917 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.177 
03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:40.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:20:40.177 00:20:40.177 --- 10.0.0.2 ping statistics --- 00:20:40.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.177 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:40.177 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.177 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:40.177 00:20:40.177 --- 10.0.0.3 ping statistics --- 00:20:40.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.177 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:40.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:40.177 00:20:40.177 --- 10.0.0.1 ping statistics --- 00:20:40.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.177 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=80097 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 80097 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 80097 ']' 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.177 03:09:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:40.437 [2024-07-13 03:09:46.753718] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:40.437 [2024-07-13 03:09:46.753904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.437 [2024-07-13 03:09:46.926336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.696 [2024-07-13 03:09:47.094090] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.696 [2024-07-13 03:09:47.094166] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
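The nvmf_veth_init sequence traced above builds the test network before the target comes up: a network namespace holding two target-side veth interfaces, an initiator-side veth left in the root namespace, a bridge joining the peer ends, and an iptables rule opening TCP port 4420. Stripped of the xtrace prefixes, and using exactly the namespace, interface, and address names that appear in the trace, the topology amounts to roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target interface (10.0.0.2)
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface (10.0.0.3)
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for i in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$i" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the root-namespace peer ends together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three one-packet pings that follow in the log are just connectivity smoke tests across that bridge: initiator to both target addresses, then target namespace back to the initiator.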
00:20:40.696 [2024-07-13 03:09:47.094182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.696 [2024-07-13 03:09:47.094194] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.696 [2024-07-13 03:09:47.094206] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.696 [2024-07-13 03:09:47.094385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.696 [2024-07-13 03:09:47.094621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.696 [2024-07-13 03:09:47.095234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.696 [2024-07-13 03:09:47.095270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.954 [2024-07-13 03:09:47.271413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:41.213 03:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.213 03:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:41.213 03:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:41.213 03:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:41.213 03:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:41.213 03:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.213 03:09:47 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:41.213 03:09:47 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:41.780 03:09:48 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:41.780 03:09:48 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:42.038 03:09:48 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:42.038 03:09:48 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.297 03:09:48 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:42.297 03:09:48 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:20:42.297 03:09:48 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:42.297 03:09:48 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:42.297 03:09:48 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:42.554 [2024-07-13 03:09:48.945763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.554 03:09:48 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.813 03:09:49 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:42.813 03:09:49 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:43.071 03:09:49 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:43.071 03:09:49 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:20:43.330 03:09:49 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.589 [2024-07-13 03:09:49.932007] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.589 03:09:49 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:43.848 03:09:50 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:43.848 03:09:50 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:43.848 03:09:50 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:43.848 03:09:50 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:45.223 Initializing NVMe Controllers 00:20:45.223 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:45.223 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:45.223 Initialization complete. Launching workers. 00:20:45.223 ======================================================== 00:20:45.223 Latency(us) 00:20:45.223 Device Information : IOPS MiB/s Average min max 00:20:45.223 PCIE (0000:00:10.0) NSID 1 from core 0: 22944.00 89.62 1394.12 352.50 8192.19 00:20:45.223 ======================================================== 00:20:45.223 Total : 22944.00 89.62 1394.12 352.50 8192.19 00:20:45.223 00:20:45.223 03:09:51 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.596 Initializing NVMe Controllers 00:20:46.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:46.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:46.596 Initialization complete. Launching workers. 00:20:46.596 ======================================================== 00:20:46.596 Latency(us) 00:20:46.596 Device Information : IOPS MiB/s Average min max 00:20:46.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2415.30 9.43 410.37 149.74 8213.60 00:20:46.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.86 0.48 8202.51 6821.32 16147.11 00:20:46.596 ======================================================== 00:20:46.596 Total : 2538.16 9.91 787.56 149.74 16147.11 00:20:46.596 00:20:46.596 03:09:52 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.967 Initializing NVMe Controllers 00:20:47.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:47.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:47.967 Initialization complete. Launching workers. 
00:20:47.967 ======================================================== 00:20:47.967 Latency(us) 00:20:47.967 Device Information : IOPS MiB/s Average min max 00:20:47.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5975.00 23.34 5359.37 1097.06 12517.13 00:20:47.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3881.00 15.16 8291.38 5835.36 19622.82 00:20:47.967 ======================================================== 00:20:47.967 Total : 9856.00 38.50 6513.91 1097.06 19622.82 00:20:47.967 00:20:48.226 03:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:48.226 03:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:50.753 Initializing NVMe Controllers 00:20:50.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.753 Controller IO queue size 128, less than required. 00:20:50.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.753 Controller IO queue size 128, less than required. 00:20:50.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:50.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:50.753 Initialization complete. Launching workers. 00:20:50.753 ======================================================== 00:20:50.753 Latency(us) 00:20:50.753 Device Information : IOPS MiB/s Average min max 00:20:50.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1287.15 321.79 105148.46 54562.56 305120.95 00:20:50.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 545.79 136.45 244547.33 116289.12 476503.57 00:20:50.753 ======================================================== 00:20:50.753 Total : 1832.94 458.23 146657.00 54562.56 476503.57 00:20:50.753 00:20:51.012 03:09:57 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:51.271 Initializing NVMe Controllers 00:20:51.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:51.271 Controller IO queue size 128, less than required. 00:20:51.271 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:51.271 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:51.271 Controller IO queue size 128, less than required. 00:20:51.271 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:51.271 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:20:51.271 WARNING: Some requested NVMe devices were skipped 00:20:51.271 No valid NVMe controllers or AIO or URING devices found 00:20:51.271 03:09:57 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:54.557 Initializing NVMe Controllers 00:20:54.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:54.557 Controller IO queue size 128, less than required. 00:20:54.557 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:54.557 Controller IO queue size 128, less than required. 00:20:54.557 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:54.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:54.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:54.557 Initialization complete. Launching workers. 00:20:54.557 00:20:54.557 ==================== 00:20:54.557 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:54.557 TCP transport: 00:20:54.557 polls: 7573 00:20:54.557 idle_polls: 4480 00:20:54.557 sock_completions: 3093 00:20:54.557 nvme_completions: 5403 00:20:54.557 submitted_requests: 8074 00:20:54.557 queued_requests: 1 00:20:54.557 00:20:54.557 ==================== 00:20:54.557 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:54.557 TCP transport: 00:20:54.557 polls: 7912 00:20:54.557 idle_polls: 4685 00:20:54.557 sock_completions: 3227 00:20:54.557 nvme_completions: 5699 00:20:54.557 submitted_requests: 8554 00:20:54.557 queued_requests: 1 00:20:54.557 ======================================================== 00:20:54.557 Latency(us) 00:20:54.557 Device Information : IOPS MiB/s Average min max 00:20:54.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1350.46 337.61 97397.21 48763.41 234180.00 00:20:54.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1424.45 356.11 92998.85 48086.53 352460.60 00:20:54.557 ======================================================== 00:20:54.557 Total : 2774.91 693.73 95139.39 48086.53 352460.60 00:20:54.557 00:20:54.557 03:10:00 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:54.557 03:10:00 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:54.557 03:10:00 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:54.557 03:10:00 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:20:54.557 03:10:00 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:54.816 03:10:01 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=fca00fb9-aea7-4cfc-8ac7-2d8fb88e30ac 00:20:54.816 03:10:01 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb fca00fb9-aea7-4cfc-8ac7-2d8fb88e30ac 00:20:54.816 03:10:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=fca00fb9-aea7-4cfc-8ac7-2d8fb88e30ac 00:20:54.816 03:10:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:20:54.816 03:10:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 
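Behind the xtrace noise, the target that the perf runs above exercised was assembled with a handful of rpc.py calls after the local NVMe controller had been attached (gen_nvme.sh feeding load_subsystem_config) and after a 64 MiB, 512-byte-block malloc bdev was created. Condensed into a sketch, with the rpc.py path, NQN, and options exactly as they expand in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                    # returns Malloc0
    $rpc nvmf_create_transport -t tcp -o                              # NVMF_TRANSPORT_OPTS as set for TCP in this run
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf then reaches that subsystem from the root namespace via -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420', with -q and -o varied across the runs recorded above.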
00:20:54.816 03:10:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:20:54.816 03:10:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:55.091 03:10:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:20:55.091 { 00:20:55.091 "uuid": "fca00fb9-aea7-4cfc-8ac7-2d8fb88e30ac", 00:20:55.091 "name": "lvs_0", 00:20:55.091 "base_bdev": "Nvme0n1", 00:20:55.091 "total_data_clusters": 1278, 00:20:55.091 "free_clusters": 1278, 00:20:55.091 "block_size": 4096, 00:20:55.091 "cluster_size": 4194304 00:20:55.091 } 00:20:55.091 ]' 00:20:55.091 03:10:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="fca00fb9-aea7-4cfc-8ac7-2d8fb88e30ac") .free_clusters' 00:20:55.091 03:10:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:20:55.091 03:10:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="fca00fb9-aea7-4cfc-8ac7-2d8fb88e30ac") .cluster_size' 00:20:55.091 5112 00:20:55.091 03:10:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:20:55.091 03:10:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:20:55.091 03:10:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:20:55.091 03:10:01 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:55.091 03:10:01 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fca00fb9-aea7-4cfc-8ac7-2d8fb88e30ac lbd_0 5112 00:20:55.357 03:10:01 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=65fd731c-e901-4006-a9fc-be1863755747 00:20:55.357 03:10:01 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 65fd731c-e901-4006-a9fc-be1863755747 lvs_n_0 00:20:55.615 03:10:02 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=c94c3c68-b3e6-4d72-a58e-6e25549e3c41 00:20:55.615 03:10:02 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb c94c3c68-b3e6-4d72-a58e-6e25549e3c41 00:20:55.615 03:10:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=c94c3c68-b3e6-4d72-a58e-6e25549e3c41 00:20:55.615 03:10:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:20:55.615 03:10:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:20:55.615 03:10:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:20:55.615 03:10:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:56.181 03:10:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:20:56.181 { 00:20:56.181 "uuid": "fca00fb9-aea7-4cfc-8ac7-2d8fb88e30ac", 00:20:56.181 "name": "lvs_0", 00:20:56.181 "base_bdev": "Nvme0n1", 00:20:56.181 "total_data_clusters": 1278, 00:20:56.181 "free_clusters": 0, 00:20:56.181 "block_size": 4096, 00:20:56.181 "cluster_size": 4194304 00:20:56.181 }, 00:20:56.181 { 00:20:56.181 "uuid": "c94c3c68-b3e6-4d72-a58e-6e25549e3c41", 00:20:56.181 "name": "lvs_n_0", 00:20:56.181 "base_bdev": "65fd731c-e901-4006-a9fc-be1863755747", 00:20:56.181 "total_data_clusters": 1276, 00:20:56.181 "free_clusters": 1276, 00:20:56.181 "block_size": 4096, 00:20:56.181 "cluster_size": 4194304 00:20:56.181 } 00:20:56.181 ]' 00:20:56.181 03:10:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="c94c3c68-b3e6-4d72-a58e-6e25549e3c41") .free_clusters' 00:20:56.181 03:10:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:20:56.181 03:10:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c94c3c68-b3e6-4d72-a58e-6e25549e3c41") .cluster_size' 00:20:56.181 5104 00:20:56.181 03:10:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:20:56.181 03:10:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:20:56.181 03:10:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:20:56.181 03:10:02 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:56.181 03:10:02 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c94c3c68-b3e6-4d72-a58e-6e25549e3c41 lbd_nest_0 5104 00:20:56.439 03:10:02 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=95fc958b-dda2-46b3-bcab-848ede2ca5f1 00:20:56.439 03:10:02 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:56.697 03:10:02 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:56.697 03:10:02 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 95fc958b-dda2-46b3-bcab-848ede2ca5f1 00:20:56.955 03:10:03 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.214 03:10:03 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:57.214 03:10:03 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:57.214 03:10:03 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:57.214 03:10:03 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:57.214 03:10:03 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.472 Initializing NVMe Controllers 00:20:57.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.472 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:57.472 WARNING: Some requested NVMe devices were skipped 00:20:57.472 No valid NVMe controllers or AIO or URING devices found 00:20:57.472 03:10:03 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:57.472 03:10:03 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:09.679 Initializing NVMe Controllers 00:21:09.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:09.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:09.679 Initialization complete. Launching workers. 
00:21:09.679 ======================================================== 00:21:09.679 Latency(us) 00:21:09.679 Device Information : IOPS MiB/s Average min max 00:21:09.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 815.55 101.94 1225.41 403.71 9073.14 00:21:09.679 ======================================================== 00:21:09.679 Total : 815.55 101.94 1225.41 403.71 9073.14 00:21:09.679 00:21:09.679 03:10:14 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:09.679 03:10:14 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:09.679 03:10:14 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:09.679 Initializing NVMe Controllers 00:21:09.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:09.679 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:09.679 WARNING: Some requested NVMe devices were skipped 00:21:09.679 No valid NVMe controllers or AIO or URING devices found 00:21:09.679 03:10:14 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:09.679 03:10:14 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.657 Initializing NVMe Controllers 00:21:19.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:19.657 Initialization complete. Launching workers. 
00:21:19.657 ======================================================== 00:21:19.657 Latency(us) 00:21:19.657 Device Information : IOPS MiB/s Average min max 00:21:19.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1315.30 164.41 24354.19 5398.39 63689.54 00:21:19.657 ======================================================== 00:21:19.657 Total : 1315.30 164.41 24354.19 5398.39 63689.54 00:21:19.657 00:21:19.657 03:10:25 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:19.657 03:10:25 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:19.657 03:10:25 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.657 Initializing NVMe Controllers 00:21:19.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.657 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:19.657 WARNING: Some requested NVMe devices were skipped 00:21:19.657 No valid NVMe controllers or AIO or URING devices found 00:21:19.657 03:10:25 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:19.657 03:10:25 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:29.631 Initializing NVMe Controllers 00:21:29.631 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:29.631 Controller IO queue size 128, less than required. 00:21:29.631 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:29.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:29.631 Initialization complete. Launching workers. 
00:21:29.631 ======================================================== 00:21:29.631 Latency(us) 00:21:29.631 Device Information : IOPS MiB/s Average min max 00:21:29.631 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3446.92 430.87 37193.77 14939.50 94251.58 00:21:29.631 ======================================================== 00:21:29.631 Total : 3446.92 430.87 37193.77 14939.50 94251.58 00:21:29.631 00:21:29.631 03:10:36 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:29.888 03:10:36 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 95fc958b-dda2-46b3-bcab-848ede2ca5f1 00:21:30.453 03:10:36 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:30.711 03:10:36 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 65fd731c-e901-4006-a9fc-be1863755747 00:21:30.968 03:10:37 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:31.241 rmmod nvme_tcp 00:21:31.241 rmmod nvme_fabrics 00:21:31.241 rmmod nvme_keyring 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 80097 ']' 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 80097 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 80097 ']' 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 80097 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80097 00:21:31.241 killing process with pid 80097 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80097' 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 80097 00:21:31.241 03:10:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 80097 00:21:33.771 03:10:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:33.771 03:10:39 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:33.771 03:10:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:33.771 03:10:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.771 03:10:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:33.771 03:10:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.771 03:10:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.771 03:10:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.771 03:10:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:33.771 ************************************ 00:21:33.771 END TEST nvmf_perf 00:21:33.771 ************************************ 00:21:33.771 00:21:33.771 real 0m53.865s 00:21:33.771 user 3m22.702s 00:21:33.771 sys 0m12.597s 00:21:33.771 03:10:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:33.771 03:10:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:33.771 03:10:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:33.771 03:10:40 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:33.771 03:10:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:33.771 03:10:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.771 03:10:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:33.771 ************************************ 00:21:33.771 START TEST nvmf_fio_host 00:21:33.771 ************************************ 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:33.771 * Looking for test storage... 
00:21:33.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.771 03:10:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
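One detail worth calling out from the common.sh sourcing above: the initiator identity is generated fresh with nvme-cli on every run rather than hard-coded. A minimal sketch consistent with the values shown in the trace (the suffix extraction here is only an illustration; common.sh may derive NVME_HOSTID differently):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # the trailing UUID, f622eda1-... in this run
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")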
00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:33.772 Cannot find device "nvmf_tgt_br" 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:33.772 Cannot find device "nvmf_tgt_br2" 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:33.772 Cannot find device "nvmf_tgt_br" 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:33.772 Cannot find device "nvmf_tgt_br2" 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:21:33.772 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:34.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:34.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:34.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:21:34.031 00:21:34.031 --- 10.0.0.2 ping statistics --- 00:21:34.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.031 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:34.031 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:34.031 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:21:34.031 00:21:34.031 --- 10.0.0.3 ping statistics --- 00:21:34.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.031 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:34.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:34.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:21:34.031 00:21:34.031 --- 10.0.0.1 ping statistics --- 00:21:34.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.031 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:34.031 03:10:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=80936 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 80936 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 80936 ']' 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.290 03:10:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.290 [2024-07-13 03:10:40.660786] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:34.290 [2024-07-13 03:10:40.661053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.549 [2024-07-13 03:10:40.846673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:34.808 [2024-07-13 03:10:41.099855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
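As in the earlier perf test, nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the new process (pid 80936 here) answers on the UNIX socket named in the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message. A rough equivalent of that wait loop, purely as an approximation of what the autotest_common.sh helper does:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nvmfpid=80936
    until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.1
    done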
00:21:34.808 [2024-07-13 03:10:41.099971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.808 [2024-07-13 03:10:41.099994] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.808 [2024-07-13 03:10:41.100012] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.808 [2024-07-13 03:10:41.100029] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.808 [2024-07-13 03:10:41.100236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.808 [2024-07-13 03:10:41.101215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.808 [2024-07-13 03:10:41.101355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.808 [2024-07-13 03:10:41.101359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.067 [2024-07-13 03:10:41.314037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:35.326 03:10:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.326 03:10:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:35.326 03:10:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:35.326 [2024-07-13 03:10:41.817835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.585 03:10:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:35.585 03:10:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.585 03:10:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.585 03:10:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:35.844 Malloc1 00:21:35.844 03:10:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:36.102 03:10:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:36.361 03:10:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:36.618 [2024-07-13 03:10:43.011064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.618 03:10:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:36.877 03:10:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:37.135 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:37.135 fio-3.35 00:21:37.135 Starting 1 thread 00:21:39.716 00:21:39.716 test: (groupid=0, jobs=1): err= 0: pid=81006: Sat Jul 13 03:10:45 2024 00:21:39.716 read: IOPS=6881, BW=26.9MiB/s (28.2MB/s)(54.0MiB/2009msec) 00:21:39.716 slat (usec): min=2, max=262, avg= 3.53, stdev= 3.02 00:21:39.716 clat (usec): min=2005, max=18098, avg=9639.10, stdev=763.43 00:21:39.716 lat (usec): min=2037, max=18101, avg=9642.63, stdev=763.19 00:21:39.716 clat percentiles (usec): 00:21:39.716 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:21:39.716 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:21:39.716 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10421], 95.00th=[10683], 00:21:39.716 | 99.00th=[11600], 99.50th=[12256], 99.90th=[16319], 99.95th=[17695], 00:21:39.716 | 99.99th=[17957] 00:21:39.716 bw ( KiB/s): min=26240, max=28352, per=99.99%, avg=27524.00, stdev=936.82, samples=4 00:21:39.716 iops : min= 6560, max= 7088, avg=6881.00, stdev=234.21, samples=4 00:21:39.716 write: IOPS=6889, BW=26.9MiB/s (28.2MB/s)(54.1MiB/2009msec); 0 zone resets 00:21:39.716 slat (usec): min=2, max=157, avg= 3.65, stdev= 2.11 00:21:39.716 clat (usec): min=1733, max=17545, avg=8821.91, stdev=694.72 00:21:39.716 lat (usec): min=1743, max=17548, avg=8825.56, stdev=694.61 00:21:39.716 clat percentiles (usec): 00:21:39.716 | 1.00th=[ 7570], 5.00th=[ 7963], 10.00th=[ 8160], 20.00th=[ 8356], 00:21:39.716 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:21:39.716 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[ 9765], 00:21:39.716 | 
99.00th=[10552], 99.50th=[11469], 99.90th=[15139], 99.95th=[16057], 00:21:39.716 | 99.99th=[17433] 00:21:39.716 bw ( KiB/s): min=27336, max=27824, per=99.99%, avg=27554.00, stdev=201.58, samples=4 00:21:39.716 iops : min= 6834, max= 6956, avg=6888.50, stdev=50.40, samples=4 00:21:39.716 lat (msec) : 2=0.01%, 4=0.12%, 10=84.98%, 20=14.90% 00:21:39.716 cpu : usr=68.48%, sys=23.21%, ctx=9, majf=0, minf=1539 00:21:39.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:39.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:39.716 issued rwts: total=13825,13841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:39.716 00:21:39.716 Run status group 0 (all jobs): 00:21:39.716 READ: bw=26.9MiB/s (28.2MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.2MB/s), io=54.0MiB (56.6MB), run=2009-2009msec 00:21:39.716 WRITE: bw=26.9MiB/s (28.2MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.2MB/s), io=54.1MiB (56.7MB), run=2009-2009msec 00:21:39.716 ----------------------------------------------------- 00:21:39.716 Suppressions used: 00:21:39.716 count bytes template 00:21:39.716 1 57 /usr/src/fio/parse.c 00:21:39.716 1 8 libtcmalloc_minimal.so 00:21:39.716 ----------------------------------------------------- 00:21:39.716 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:39.716 03:10:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:39.716 03:10:46 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:39.716 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:39.716 fio-3.35 00:21:39.716 Starting 1 thread 00:21:42.255 00:21:42.255 test: (groupid=0, jobs=1): err= 0: pid=81053: Sat Jul 13 03:10:48 2024 00:21:42.255 read: IOPS=6597, BW=103MiB/s (108MB/s)(207MiB/2010msec) 00:21:42.255 slat (usec): min=4, max=138, avg= 5.28, stdev= 2.71 00:21:42.255 clat (usec): min=3026, max=22017, avg=10847.94, stdev=3199.22 00:21:42.255 lat (usec): min=3031, max=22021, avg=10853.23, stdev=3199.29 00:21:42.255 clat percentiles (usec): 00:21:42.255 | 1.00th=[ 5342], 5.00th=[ 6325], 10.00th=[ 6980], 20.00th=[ 7963], 00:21:42.255 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10552], 60.00th=[11338], 00:21:42.255 | 70.00th=[12256], 80.00th=[13304], 90.00th=[15008], 95.00th=[16909], 00:21:42.255 | 99.00th=[20055], 99.50th=[20317], 99.90th=[21365], 99.95th=[21627], 00:21:42.255 | 99.99th=[21890] 00:21:42.255 bw ( KiB/s): min=43936, max=62464, per=49.39%, avg=52136.00, stdev=8130.66, samples=4 00:21:42.255 iops : min= 2746, max= 3904, avg=3258.50, stdev=508.17, samples=4 00:21:42.255 write: IOPS=3788, BW=59.2MiB/s (62.1MB/s)(107MiB/1804msec); 0 zone resets 00:21:42.255 slat (usec): min=38, max=225, avg=44.15, stdev= 8.46 00:21:42.255 clat (usec): min=8203, max=26495, avg=15523.60, stdev=2607.51 00:21:42.255 lat (usec): min=8243, max=26536, avg=15567.74, stdev=2608.15 00:21:42.255 clat percentiles (usec): 00:21:42.255 | 1.00th=[10421], 5.00th=[11731], 10.00th=[12518], 20.00th=[13304], 00:21:42.255 | 30.00th=[14091], 40.00th=[14746], 50.00th=[15270], 60.00th=[15926], 00:21:42.255 | 70.00th=[16581], 80.00th=[17433], 90.00th=[19006], 95.00th=[20055], 00:21:42.255 | 99.00th=[23200], 99.50th=[24249], 99.90th=[26084], 99.95th=[26346], 00:21:42.255 | 99.99th=[26608] 00:21:42.255 bw ( KiB/s): min=45536, max=64992, per=89.50%, avg=54256.00, stdev=8410.44, samples=4 00:21:42.255 iops : min= 2846, max= 4062, avg=3391.00, stdev=525.65, samples=4 00:21:42.255 lat (msec) : 4=0.06%, 10=28.98%, 20=68.60%, 50=2.35% 00:21:42.255 cpu : usr=79.60%, sys=15.17%, ctx=51, majf=0, minf=2106 00:21:42.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:42.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:42.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:42.255 issued rwts: total=13260,6835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:42.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:42.255 00:21:42.255 Run status group 0 (all jobs): 00:21:42.255 READ: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=207MiB (217MB), run=2010-2010msec 00:21:42.255 WRITE: bw=59.2MiB/s (62.1MB/s), 59.2MiB/s-59.2MiB/s (62.1MB/s-62.1MB/s), io=107MiB (112MB), run=1804-1804msec 00:21:42.514 ----------------------------------------------------- 00:21:42.514 Suppressions used: 00:21:42.514 count bytes template 00:21:42.514 1 57 /usr/src/fio/parse.c 00:21:42.514 119 11424 /usr/src/fio/iolog.c 00:21:42.514 1 8 libtcmalloc_minimal.so 00:21:42.514 ----------------------------------------------------- 00:21:42.514 00:21:42.514 03:10:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:21:42.773 03:10:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:42.773 03:10:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:42.773 03:10:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:42.773 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:21:42.773 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:21:42.773 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:42.773 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:42.773 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:21:42.773 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:21:42.773 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:42.773 03:10:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:21:43.032 Nvme0n1 00:21:43.032 03:10:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:43.290 03:10:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=5820bd3b-f94b-4b42-8e95-84f25fb8ceec 00:21:43.290 03:10:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 5820bd3b-f94b-4b42-8e95-84f25fb8ceec 00:21:43.290 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=5820bd3b-f94b-4b42-8e95-84f25fb8ceec 00:21:43.290 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:43.290 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:21:43.290 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:21:43.290 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:43.548 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:43.549 { 00:21:43.549 "uuid": "5820bd3b-f94b-4b42-8e95-84f25fb8ceec", 00:21:43.549 "name": "lvs_0", 00:21:43.549 "base_bdev": "Nvme0n1", 00:21:43.549 "total_data_clusters": 4, 00:21:43.549 "free_clusters": 4, 00:21:43.549 "block_size": 4096, 00:21:43.549 "cluster_size": 1073741824 00:21:43.549 } 00:21:43.549 ]' 00:21:43.549 03:10:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5820bd3b-f94b-4b42-8e95-84f25fb8ceec") .free_clusters' 00:21:43.549 03:10:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:21:43.549 03:10:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5820bd3b-f94b-4b42-8e95-84f25fb8ceec") .cluster_size' 00:21:43.806 4096 00:21:43.806 03:10:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:21:43.806 03:10:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:21:43.806 03:10:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:21:43.806 03:10:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 
lbd_0 4096 00:21:44.064 f389e820-25d3-41a2-b3bd-0b63c2cde74e 00:21:44.064 03:10:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:44.323 03:10:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:44.582 03:10:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:44.841 03:10:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:44.841 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:44.841 fio-3.35 00:21:44.841 Starting 1 thread 00:21:47.370 00:21:47.370 test: (groupid=0, jobs=1): err= 0: pid=81157: Sat Jul 13 03:10:53 2024 00:21:47.370 read: IOPS=5124, BW=20.0MiB/s (21.0MB/s)(40.2MiB/2010msec) 00:21:47.370 slat (usec): min=2, max=248, avg= 3.25, stdev= 3.16 00:21:47.370 clat (usec): min=3707, max=21068, avg=13006.74, stdev=1072.65 00:21:47.370 lat (usec): min=3722, max=21071, avg=13009.99, stdev=1072.26 00:21:47.370 clat 
percentiles (usec): 00:21:47.370 | 1.00th=[10683], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:21:47.370 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:21:47.370 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:21:47.370 | 99.00th=[15401], 99.50th=[15926], 99.90th=[19006], 99.95th=[20841], 00:21:47.370 | 99.99th=[21103] 00:21:47.370 bw ( KiB/s): min=19369, max=20920, per=99.92%, avg=20484.25, stdev=745.13, samples=4 00:21:47.370 iops : min= 4842, max= 5230, avg=5121.00, stdev=186.41, samples=4 00:21:47.370 write: IOPS=5119, BW=20.0MiB/s (21.0MB/s)(40.2MiB/2010msec); 0 zone resets 00:21:47.370 slat (usec): min=2, max=167, avg= 3.50, stdev= 2.36 00:21:47.370 clat (usec): min=2576, max=19192, avg=11810.21, stdev=1021.70 00:21:47.370 lat (usec): min=2601, max=19196, avg=11813.71, stdev=1021.42 00:21:47.370 clat percentiles (usec): 00:21:47.370 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:21:47.370 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:21:47.370 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13304], 00:21:47.370 | 99.00th=[14091], 99.50th=[14484], 99.90th=[18482], 99.95th=[19006], 00:21:47.370 | 99.99th=[19268] 00:21:47.370 bw ( KiB/s): min=20160, max=20736, per=99.73%, avg=20423.75, stdev=264.58, samples=4 00:21:47.370 iops : min= 5040, max= 5184, avg=5105.75, stdev=66.30, samples=4 00:21:47.370 lat (msec) : 4=0.04%, 10=1.29%, 20=98.63%, 50=0.04% 00:21:47.370 cpu : usr=75.11%, sys=19.01%, ctx=14, majf=0, minf=1539 00:21:47.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:47.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:47.370 issued rwts: total=10301,10290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:47.370 00:21:47.370 Run status group 0 (all jobs): 00:21:47.370 READ: bw=20.0MiB/s (21.0MB/s), 20.0MiB/s-20.0MiB/s (21.0MB/s-21.0MB/s), io=40.2MiB (42.2MB), run=2010-2010msec 00:21:47.370 WRITE: bw=20.0MiB/s (21.0MB/s), 20.0MiB/s-20.0MiB/s (21.0MB/s-21.0MB/s), io=40.2MiB (42.1MB), run=2010-2010msec 00:21:47.628 ----------------------------------------------------- 00:21:47.628 Suppressions used: 00:21:47.628 count bytes template 00:21:47.628 1 58 /usr/src/fio/parse.c 00:21:47.628 1 8 libtcmalloc_minimal.so 00:21:47.628 ----------------------------------------------------- 00:21:47.628 00:21:47.628 03:10:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:47.886 03:10:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:47.886 03:10:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=af21b416-9f2b-477b-a8a8-029c9183178b 00:21:48.145 03:10:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb af21b416-9f2b-477b-a8a8-029c9183178b 00:21:48.145 03:10:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=af21b416-9f2b-477b-a8a8-029c9183178b 00:21:48.145 03:10:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:48.145 03:10:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:21:48.145 03:10:54 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1367 -- # local cs 00:21:48.145 03:10:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:48.145 03:10:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:48.145 { 00:21:48.145 "uuid": "5820bd3b-f94b-4b42-8e95-84f25fb8ceec", 00:21:48.145 "name": "lvs_0", 00:21:48.145 "base_bdev": "Nvme0n1", 00:21:48.145 "total_data_clusters": 4, 00:21:48.145 "free_clusters": 0, 00:21:48.145 "block_size": 4096, 00:21:48.145 "cluster_size": 1073741824 00:21:48.145 }, 00:21:48.145 { 00:21:48.145 "uuid": "af21b416-9f2b-477b-a8a8-029c9183178b", 00:21:48.145 "name": "lvs_n_0", 00:21:48.145 "base_bdev": "f389e820-25d3-41a2-b3bd-0b63c2cde74e", 00:21:48.145 "total_data_clusters": 1022, 00:21:48.145 "free_clusters": 1022, 00:21:48.145 "block_size": 4096, 00:21:48.145 "cluster_size": 4194304 00:21:48.145 } 00:21:48.145 ]' 00:21:48.145 03:10:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="af21b416-9f2b-477b-a8a8-029c9183178b") .free_clusters' 00:21:48.404 03:10:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:21:48.404 03:10:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="af21b416-9f2b-477b-a8a8-029c9183178b") .cluster_size' 00:21:48.404 4088 00:21:48.404 03:10:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:21:48.404 03:10:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:21:48.404 03:10:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:21:48.404 03:10:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:48.662 a505f795-4622-451d-99a1-cd0cfd84ec1f 00:21:48.662 03:10:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:48.921 03:10:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:49.256 03:10:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:49.534 03:10:55 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:49.534 03:10:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:49.534 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:49.534 fio-3.35 00:21:49.534 Starting 1 thread 00:21:52.065 00:21:52.065 test: (groupid=0, jobs=1): err= 0: pid=81230: Sat Jul 13 03:10:58 2024 00:21:52.065 read: IOPS=4558, BW=17.8MiB/s (18.7MB/s)(35.8MiB/2011msec) 00:21:52.065 slat (usec): min=2, max=207, avg= 3.42, stdev= 3.03 00:21:52.065 clat (usec): min=3735, max=26223, avg=14646.07, stdev=1305.99 00:21:52.065 lat (usec): min=3741, max=26226, avg=14649.49, stdev=1305.74 00:21:52.065 clat percentiles (usec): 00:21:52.065 | 1.00th=[11994], 5.00th=[12911], 10.00th=[13173], 20.00th=[13698], 00:21:52.065 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:21:52.065 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16581], 00:21:52.065 | 99.00th=[17433], 99.50th=[18482], 99.90th=[24773], 99.95th=[25822], 00:21:52.065 | 99.99th=[26346] 00:21:52.065 bw ( KiB/s): min=17336, max=18592, per=99.75%, avg=18188.00, stdev=574.54, samples=4 00:21:52.065 iops : min= 4334, max= 4648, avg=4547.00, stdev=143.63, samples=4 00:21:52.065 write: IOPS=4558, BW=17.8MiB/s (18.7MB/s)(35.8MiB/2011msec); 0 zone resets 00:21:52.065 slat (usec): min=2, max=152, avg= 3.61, stdev= 2.01 00:21:52.065 clat (usec): min=2447, max=24876, avg=13260.59, stdev=1220.06 00:21:52.065 lat (usec): min=2458, max=24879, avg=13264.21, stdev=1219.94 00:21:52.065 clat percentiles (usec): 00:21:52.065 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:21:52.065 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13566], 00:21:52.065 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14615], 95.00th=[15008], 00:21:52.065 | 99.00th=[15926], 99.50th=[16450], 99.90th=[23200], 99.95th=[23725], 00:21:52.065 | 99.99th=[24773] 00:21:52.065 bw ( KiB/s): min=18176, max=18312, per=99.96%, avg=18226.00, stdev=64.79, samples=4 00:21:52.065 iops : min= 4544, max= 4578, avg=4556.50, stdev=16.20, samples=4 00:21:52.065 lat (msec) : 4=0.04%, 10=0.36%, 20=99.31%, 50=0.28% 00:21:52.065 cpu : usr=74.38%, sys=20.10%, ctx=7, majf=0, minf=1538 00:21:52.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:52.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:52.065 issued rwts: total=9167,9167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.065 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:52.065 00:21:52.065 Run status group 0 (all jobs): 00:21:52.065 READ: bw=17.8MiB/s (18.7MB/s), 17.8MiB/s-17.8MiB/s (18.7MB/s-18.7MB/s), io=35.8MiB (37.5MB), run=2011-2011msec 00:21:52.065 WRITE: bw=17.8MiB/s (18.7MB/s), 17.8MiB/s-17.8MiB/s (18.7MB/s-18.7MB/s), io=35.8MiB (37.5MB), run=2011-2011msec 00:21:52.065 ----------------------------------------------------- 00:21:52.065 Suppressions used: 00:21:52.065 count bytes template 00:21:52.065 1 58 /usr/src/fio/parse.c 00:21:52.065 1 8 libtcmalloc_minimal.so 00:21:52.065 ----------------------------------------------------- 00:21:52.065 00:21:52.323 03:10:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:52.580 03:10:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:21:52.580 03:10:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:52.838 03:10:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:53.096 03:10:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:53.355 03:10:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:53.614 03:10:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.181 rmmod nvme_tcp 00:21:54.181 rmmod nvme_fabrics 00:21:54.181 rmmod nvme_keyring 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 80936 ']' 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 80936 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 80936 ']' 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 80936 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.181 
03:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80936 00:21:54.181 killing process with pid 80936 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80936' 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 80936 00:21:54.181 03:11:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 80936 00:21:55.556 03:11:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.556 03:11:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.556 03:11:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.556 03:11:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.556 03:11:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.556 03:11:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.556 03:11:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.556 03:11:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.556 03:11:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:55.556 00:21:55.556 real 0m21.944s 00:21:55.556 user 1m34.629s 00:21:55.556 sys 0m4.736s 00:21:55.556 03:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:55.556 03:11:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.556 ************************************ 00:21:55.556 END TEST nvmf_fio_host 00:21:55.556 ************************************ 00:21:55.556 03:11:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:55.556 03:11:02 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:55.556 03:11:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:55.814 03:11:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.814 03:11:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:55.814 ************************************ 00:21:55.814 START TEST nvmf_failover 00:21:55.814 ************************************ 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:55.814 * Looking for test storage... 
00:21:55.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:55.814 
03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.814 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:55.815 Cannot find device "nvmf_tgt_br" 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:55.815 Cannot find device "nvmf_tgt_br2" 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:55.815 Cannot find device "nvmf_tgt_br" 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:55.815 Cannot find device "nvmf_tgt_br2" 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:55.815 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
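The "Cannot find device ..." messages above (and the "Cannot open network namespace ..." ones that follow) are expected: nvmf_veth_init first tears down whatever interfaces a previous run may have left behind, and most of them are already gone because the fio_host test cleaned up after itself. The effect is a tolerant cleanup along these lines (an illustrative sketch, not the exact helper):

    # Remove any leftovers from a previous run; missing devices are not an error.
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br nvmf_init_if; do
        ip link delete "$dev" 2>/dev/null || true
    done
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
    # ...then rebuild the namespace, veth pairs, bridge and addresses from scratch.

The rebuild that follows is the same sequence used for the fio_host test earlier in the log.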
00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:56.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:56.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:56.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:56.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:21:56.073 00:21:56.073 --- 10.0.0.2 ping statistics --- 00:21:56.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.073 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:56.073 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:56.073 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:21:56.073 00:21:56.073 --- 10.0.0.3 ping statistics --- 00:21:56.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.073 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:56.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:21:56.073 00:21:56.073 --- 10.0.0.1 ping statistics --- 00:21:56.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.073 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.073 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:56.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=81490 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 81490 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 81490 ']' 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
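nvmfappstart launches the target inside the namespace and waitforlisten blocks until its RPC socket answers before any rpc.py calls are made. A minimal version of that pattern (the polling loop here is illustrative, not the helper's real implementation; paths are shortened to the repository root):

    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the default RPC socket until the application is ready to serve RPCs.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

With core mask 0xE the failover target gets three reactors on cores 1-3, which matches the "Total cores available: 3" line that follows.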
00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.074 03:11:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:56.332 [2024-07-13 03:11:02.656300] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:56.332 [2024-07-13 03:11:02.656492] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.589 [2024-07-13 03:11:02.833357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:56.589 [2024-07-13 03:11:03.061437] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.589 [2024-07-13 03:11:03.061531] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.589 [2024-07-13 03:11:03.061579] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.589 [2024-07-13 03:11:03.061593] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.589 [2024-07-13 03:11:03.061618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.589 [2024-07-13 03:11:03.061847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.589 [2024-07-13 03:11:03.062503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.589 [2024-07-13 03:11:03.062526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.847 [2024-07-13 03:11:03.272600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:57.414 03:11:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.414 03:11:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:57.414 03:11:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.414 03:11:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.414 03:11:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:57.414 03:11:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.414 03:11:03 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:57.414 [2024-07-13 03:11:03.905541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.674 03:11:03 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:57.931 Malloc0 00:21:57.931 03:11:04 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:58.189 03:11:04 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:58.448 03:11:04 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.448 [2024-07-13 03:11:04.887814] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:58.448 03:11:04 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:58.706 [2024-07-13 03:11:05.096053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:58.706 03:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:58.965 [2024-07-13 03:11:05.308231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:58.965 03:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:58.965 03:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=81542 00:21:58.965 03:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.965 03:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 81542 /var/tmp/bdevperf.sock 00:21:58.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.965 03:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 81542 ']' 00:21:58.965 03:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.965 03:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.965 03:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:58.965 03:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.965 03:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:59.899 03:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.899 03:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:59.899 03:11:06 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:00.466 NVMe0n1 00:22:00.466 03:11:06 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:00.724 00:22:00.724 03:11:07 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=81571 00:22:00.724 03:11:07 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:00.724 03:11:07 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:01.659 03:11:08 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.918 03:11:08 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:05.202 03:11:11 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:05.202 00:22:05.202 03:11:11 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:05.460 03:11:11 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:08.741 03:11:14 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:08.741 [2024-07-13 03:11:15.189570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.741 03:11:15 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:10.210 03:11:16 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:10.210 03:11:16 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 81571 00:22:16.767 0 00:22:16.767 03:11:22 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 81542 00:22:16.767 03:11:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 81542 ']' 00:22:16.767 03:11:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 81542 00:22:16.767 03:11:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:16.767 03:11:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.767 03:11:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81542 00:22:16.767 03:11:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:16.767 killing process with pid 81542 00:22:16.767 
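Pulled out of the timestamped output, the failover exercise that host/failover.sh drives above condenses to the RPC sequence below. This is an editorial reconstruction, not part of the log: the paths, NQN and ports are taken verbatim from the run, while the sleeps, the trap handler and the result bookkeeping are left out.

# Condensed sketch of the failover sequence logged above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
PERF_RPC="$RPC -s /var/tmp/bdevperf.sock"                  # bdevperf's private RPC socket
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192               # TCP transport init
$RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MB malloc bdev, 512 B blocks
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns $NQN Malloc0
for port in 4420 4421 4422; do                             # three candidate paths
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
done

# bdevperf (started above with -z -r /var/tmp/bdevperf.sock) attaches two paths,
# then listeners are removed and re-added under I/O to force the failovers seen below:
$PERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
$PERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # drop 4420 under I/O
$PERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # drop 4421
$RPC nvmf_subsystem_add_listener    $NQN -t tcp -a 10.0.0.2 -s 4420   # restore 4420
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # drop 4422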
03:11:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:16.767 03:11:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81542' 00:22:16.767 03:11:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 81542 00:22:16.767 03:11:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 81542 00:22:17.032 03:11:23 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:17.032 [2024-07-13 03:11:05.409612] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:17.032 [2024-07-13 03:11:05.409769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81542 ] 00:22:17.032 [2024-07-13 03:11:05.569015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.032 [2024-07-13 03:11:05.786652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.032 [2024-07-13 03:11:05.956532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:17.032 Running I/O for 15 seconds... 00:22:17.032 [2024-07-13 03:11:08.285013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.032 [2024-07-13 03:11:08.285559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.285717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.032 [2024-07-13 03:11:08.285883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.285974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.032 [2024-07-13 03:11:08.286118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.286217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.032 [2024-07-13 03:11:08.286309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.286395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:22:17.032 [2024-07-13 03:11:08.286817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.032 [2024-07-13 03:11:08.286969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.287102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.287207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.287299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.287409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.287499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.287599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.287702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.287797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.287884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.288054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.288172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.288281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.288385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.288480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.288584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.288688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.288777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.288965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.289083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.289187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.289278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.289391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:17.032 [2024-07-13 03:11:08.289529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.289682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.289779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.289903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.290010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.290117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.290208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.290313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.290403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.290501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.290592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.290695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.290785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.290904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.291029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.291131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.291221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.291320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.291454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.291556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.291644] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.032 [2024-07-13 03:11:08.291753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.032 [2024-07-13 03:11:08.291872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.291980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.292110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.292207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.292303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.292403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.292493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.292595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.292675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.292780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.292861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.293008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.293107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.293208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.293301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.293405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.293499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.293620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.293712] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.293818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.293926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.294034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.294126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.294228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.294322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.294461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.294557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.294671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.294791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.294934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.295014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.295140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.295233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.295334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.295429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.295543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.295636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.295770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.295891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48744 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.296038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.296144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.296251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.296357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.296459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.296550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.296651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.296742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.296842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.296952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.297062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.297270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.297471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.297537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.297583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 
[2024-07-13 03:11:08.297626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.297670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.297713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.297758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.297814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.297860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.297906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.297970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.297992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.298014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.298036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.298057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.298078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.298100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.033 [2024-07-13 03:11:08.298121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.033 [2024-07-13 03:11:08.298143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.298960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.298982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.299003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.299025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.301402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.301503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.301638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.301734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.301847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:17.034 [2024-07-13 03:11:08.301957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.302076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.302168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.302268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.302359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.302460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.302551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.302651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.302741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.302837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.302956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.303072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.303153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.303245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.303337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.303441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.303531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.303631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.303741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.303846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.303954] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.304066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.304148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.304281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.304416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.304522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.304613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.304746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.304835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.034 [2024-07-13 03:11:08.304989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.034 [2024-07-13 03:11:08.305095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.305200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.305296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.305461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.305568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.305695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.305817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.305917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.306007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.306134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.306241] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.306394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.306517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.306617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.306754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.306874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.306972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.307109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.307208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.307354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.307489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.307580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.307704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.307848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.307944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.308045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.308149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.308266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.308346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.308462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.308543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48312 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.308639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.308720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.308820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.308930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.308970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.308996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.309019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.309040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.309064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.309086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.309108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.309130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.309151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.309183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.309206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.309228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.309249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.309271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.309309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.309361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:17.035 [2024-07-13 03:11:08.309379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.309399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.309416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.309436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.035 [2024-07-13 03:11:08.309453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.309473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.035 [2024-07-13 03:11:08.309505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.309523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(5) to be set 00:22:17.035 [2024-07-13 03:11:08.309546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.035 [2024-07-13 03:11:08.309562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.035 [2024-07-13 03:11:08.309577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49304 len:8 PRP1 0x0 PRP2 0x0 00:22:17.035 [2024-07-13 03:11:08.309610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.035 [2024-07-13 03:11:08.309870] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 00:22:17.035 [2024-07-13 03:11:08.309912] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:17.035 [2024-07-13 03:11:08.309947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:17.035 [2024-07-13 03:11:08.310044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:22:17.035 [2024-07-13 03:11:08.314437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:17.035 [2024-07-13 03:11:08.361791] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
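The block above records the first forced failover: once the 10.0.0.2:4420 listener disappears, the outstanding qpair is freed, bdev_nvme fails the trid over to 10.0.0.2:4421 and the controller reset completes. For anyone reproducing this by hand, one way to confirm which path ended up active is to query bdevperf's RPC socket; this check is illustrative only (the test itself does not run it) and the exact JSON layout of the reply varies between SPDK releases:

# Illustrative check, not part of the logged run; assumes bdev_nvme_get_controllers
# is available in this SPDK build and that bdevperf is still serving RPCs.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_get_controllers -n NVMe0
# After the failover recorded above, the trid reported for NVMe0 should name
# trsvcid 4421 rather than 4420.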
[... 2024-07-13 03:11:11.902313 through 03:11:11.907966: several hundred repeated per-command notices omitted — READ/WRITE sqid:1 commands (lba 121080-122096) each completed with ABORTED - SQ DELETION (00/08) qid:1 while the qpair was torn down ...]
00:22:17.039 [2024-07-13 03:11:11.908027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:17.039 [2024-07-13 03:11:11.908048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:17.039 [2024-07-13 03:11:11.908067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121536 len:8 PRP1 0x0 PRP2 0x0
00:22:17.039 [2024-07-13 03:11:11.908088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.039 [2024-07-13 03:11:11.908346] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002ba00 was disconnected and freed. reset controller.
00:22:17.039 [2024-07-13 03:11:11.908372] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:17.039 [2024-07-13 03:11:11.908442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.039 [2024-07-13 03:11:11.908471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.039 [2024-07-13 03:11:11.908491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.039 [2024-07-13 03:11:11.908509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.039 [2024-07-13 03:11:11.908527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.039 [2024-07-13 03:11:11.908545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.039 [2024-07-13 03:11:11.908564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.039 [2024-07-13 03:11:11.908581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.039 [2024-07-13 03:11:11.908599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:17.039 [2024-07-13 03:11:11.908670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor
00:22:17.039 [2024-07-13 03:11:11.912718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:17.039 [2024-07-13 03:11:11.962596] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
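The abort/failover bursts in this excerpt all follow the same bdev_nvme sequence: queued READ/WRITE commands on the I/O qpair are completed with ABORTED - SQ DELETION as the qpair is torn down, bdev_nvme_failover_trid moves the target from one 10.0.0.2 listener port to the next (4420 -> 4421 -> 4422), and the reset finishes with "Resetting controller successful." As an illustrative aid only — a hypothetical helper, not part of this job's output and not an SPDK tool — a minimal Python sketch that condenses a console log in this format down to those state transitions could look like the following; it assumes nothing beyond the "[date time] file.c:line:function: *LEVEL*: message" layout visible in the records above.

#!/usr/bin/env python3
# Hypothetical helper (illustration only): condense bdev_nvme failover events
# out of a console log in the record format shown in this excerpt.
import re
import sys

RECORD_RE = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]\s+"  # [2024-07-13 03:11:08.309870]
    r"\S+\.c:\s*\d+:\w+:\s+"                                    # bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb:
    r"\*(?P<level>\w+)\*:\s+(?P<msg>.*)"                        # *NOTICE*: message ...
)

# State-transition messages worth keeping; per-command abort notices are only counted.
KEEP = (
    "was disconnected and freed",
    "Start failover",
    "in failed state",
    "resetting controller",
    "Resetting controller successful",
)

def summarize(lines):
    aborted = 0
    for line in lines:
        m = RECORD_RE.search(line)
        if not m:
            continue
        msg = m.group("msg").rstrip()
        if "ABORTED - SQ DELETION" in msg:
            aborted += 1  # count the per-command abort completions instead of printing them
        elif any(key in msg for key in KEEP):
            yield "{} {:>6} {}".format(m.group("ts"), m.group("level"), msg)
    yield "({} per-command ABORTED - SQ DELETION completions counted)".format(aborted)

if __name__ == "__main__":
    for summary_line in summarize(sys.stdin):
        print(summary_line)

Fed the raw console text on stdin, a script like this would print only the disconnect/failover/reset records plus a single count of the aborted commands, which is essentially the shape of the condensed blocks above and below.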
[... 2024-07-13 03:11:16.470330 through 03:11:16.471480: 4 ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3) each completed with ABORTED - SQ DELETION (00/08); repeated notices omitted ...]
00:22:17.039 [2024-07-13 03:11:16.471612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set
[... 2024-07-13 03:11:16.472614 onward: repeated per-command notices omitted — READ sqid:1 commands (lba 115048 onward) and WRITE sqid:1 commands (lba 115432 onward) each completed with ABORTED - SQ DELETION (00/08) qid:1; the run continues past this point ...]
00:22:17.041 [2024-07-13 
03:11:16.480636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.041 [2024-07-13 03:11:16.480655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.480675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.041 [2024-07-13 03:11:16.480694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.480714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.041 [2024-07-13 03:11:16.480732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.480753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.041 [2024-07-13 03:11:16.480772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.480792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.041 [2024-07-13 03:11:16.480810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.480830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.041 [2024-07-13 03:11:16.480849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.480869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.041 [2024-07-13 03:11:16.480932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.480958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.041 [2024-07-13 03:11:16.480977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.480998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:115792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:63 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.041 [2024-07-13 03:11:16.481575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.041 [2024-07-13 03:11:16.481596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.042 [2024-07-13 03:11:16.481614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.481635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.481654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.481674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.481693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.481713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.481731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.481751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.481770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.481791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.481809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.481830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.481865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.481900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115288 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.481922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.481942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.481971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:17.042 [2024-07-13 03:11:16.482408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.042 [2024-07-13 03:11:16.482446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.042 [2024-07-13 03:11:16.482485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.042 [2024-07-13 03:11:16.482526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.042 [2024-07-13 03:11:16.482578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.042 [2024-07-13 03:11:16.482620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.042 [2024-07-13 03:11:16.482660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:115936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.042 [2024-07-13 03:11:16.482699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 
03:11:16.482818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.482979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.042 [2024-07-13 03:11:16.482998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.483018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(5) to be set 00:22:17.042 [2024-07-13 03:11:16.483041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.042 [2024-07-13 03:11:16.483057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.042 [2024-07-13 03:11:16.483074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115424 len:8 PRP1 0x0 PRP2 0x0 00:22:17.042 [2024-07-13 03:11:16.483102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.483123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.042 [2024-07-13 03:11:16.483137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.042 [2024-07-13 03:11:16.483152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115944 len:8 PRP1 0x0 PRP2 0x0 00:22:17.042 [2024-07-13 03:11:16.483175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.483193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.042 [2024-07-13 03:11:16.483207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.042 [2024-07-13 03:11:16.483227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115952 len:8 PRP1 0x0 PRP2 0x0 00:22:17.042 [2024-07-13 03:11:16.483245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.483262] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.042 [2024-07-13 03:11:16.483276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.042 [2024-07-13 03:11:16.483290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115960 len:8 PRP1 0x0 PRP2 0x0 00:22:17.042 [2024-07-13 03:11:16.483308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.483324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.042 [2024-07-13 03:11:16.483338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.042 [2024-07-13 03:11:16.483352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115968 len:8 PRP1 0x0 PRP2 0x0 00:22:17.042 [2024-07-13 03:11:16.483370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.042 [2024-07-13 03:11:16.483386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.042 [2024-07-13 03:11:16.483400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.043 [2024-07-13 03:11:16.483414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115976 len:8 PRP1 0x0 PRP2 0x0 00:22:17.043 [2024-07-13 03:11:16.483431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.043 [2024-07-13 03:11:16.483448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.043 [2024-07-13 03:11:16.483462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.043 [2024-07-13 03:11:16.483475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115984 len:8 PRP1 0x0 PRP2 0x0 00:22:17.043 [2024-07-13 03:11:16.483492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.043 [2024-07-13 03:11:16.483510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.043 [2024-07-13 03:11:16.483523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.043 [2024-07-13 03:11:16.483537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115992 len:8 PRP1 0x0 PRP2 0x0 00:22:17.043 [2024-07-13 03:11:16.483554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.043 [2024-07-13 03:11:16.483571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.043 [2024-07-13 03:11:16.483585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.043 [2024-07-13 03:11:16.483606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116000 len:8 PRP1 0x0 PRP2 0x0 00:22:17.043 [2024-07-13 03:11:16.483625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.043 [2024-07-13 03:11:16.483643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:17.043 [2024-07-13 03:11:16.483656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.043 [2024-07-13 03:11:16.483679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116008 len:8 PRP1 0x0 PRP2 0x0 00:22:17.043 [2024-07-13 03:11:16.483698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.043 [2024-07-13 03:11:16.483716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.043 [2024-07-13 03:11:16.483730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.043 [2024-07-13 03:11:16.483747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116016 len:8 PRP1 0x0 PRP2 0x0 00:22:17.043 [2024-07-13 03:11:16.483765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.043 [2024-07-13 03:11:16.483783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.043 [2024-07-13 03:11:16.483796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.043 [2024-07-13 03:11:16.483811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116024 len:8 PRP1 0x0 PRP2 0x0 00:22:17.043 [2024-07-13 03:11:16.483833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.043 [2024-07-13 03:11:16.483866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.043 [2024-07-13 03:11:16.483912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.043 [2024-07-13 03:11:16.483942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116032 len:8 PRP1 0x0 PRP2 0x0 00:22:17.043 [2024-07-13 03:11:16.483975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.043 [2024-07-13 03:11:16.484010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.043 [2024-07-13 03:11:16.484037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.043 [2024-07-13 03:11:16.484066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116040 len:8 PRP1 0x0 PRP2 0x0 00:22:17.043 [2024-07-13 03:11:16.484097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.043 [2024-07-13 03:11:16.484131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.043 [2024-07-13 03:11:16.484148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.043 [2024-07-13 03:11:16.484163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116048 len:8 PRP1 0x0 PRP2 0x0 00:22:17.043 [2024-07-13 03:11:16.484181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.043 [2024-07-13 03:11:16.484198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.043 [2024-07-13 
03:11:16.484212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.043 [2024-07-13 03:11:16.484226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116056 len:8 PRP1 0x0 PRP2 0x0 00:22:17.043 [2024-07-13 03:11:16.484243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.043 [2024-07-13 03:11:16.484260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.043 [2024-07-13 03:11:16.484287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.043 [2024-07-13 03:11:16.484303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116064 len:8 PRP1 0x0 PRP2 0x0 00:22:17.043 [2024-07-13 03:11:16.484321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.043 [2024-07-13 03:11:16.484582] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002c180 was disconnected and freed. reset controller. 00:22:17.043 [2024-07-13 03:11:16.484608] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:17.043 [2024-07-13 03:11:16.484629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:17.043 [2024-07-13 03:11:16.484700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:22:17.043 [2024-07-13 03:11:16.488814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:17.043 [2024-07-13 03:11:16.541724] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
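The burst of ABORTED - SQ DELETION notices above is the expected signature of a path switch: every command still queued on the old I/O submission queue is completed as aborted, after which bdev_nvme fails over to the alternate trid (here from 10.0.0.2:4422 back to 10.0.0.2:4420) and resets the controller. A minimal sketch of the multipath setup this test exercises, using only rpc.py calls that appear elsewhere in this trace (paths, NQN and RPC socket are copied from the log; the exact ordering inside failover.sh differs slightly):

  # Expose the subsystem on three TCP ports so bdev_nvme has alternate paths.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Register each path with the bdevperf instance; they become trids of controller NVMe0.
  for port in 4420 4421 4422; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done

  # Removing the active path during I/O triggers the abort/failover/reset sequence logged above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Each successful switch leaves one 'Resetting controller successful' line in the log.
  grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt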
00:22:17.043 
00:22:17.043 Latency(us)
00:22:17.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:17.043 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:17.043 Verification LBA range: start 0x0 length 0x4000
00:22:17.043 NVMe0n1 : 15.01 6611.72 25.83 260.35 0.00 18586.64 826.65 32887.16
00:22:17.043 ===================================================================================================================
00:22:17.043 Total : 6611.72 25.83 260.35 0.00 18586.64 826.65 32887.16
00:22:17.043 Received shutdown signal, test time was about 15.000000 seconds
00:22:17.043 
00:22:17.043 Latency(us)
00:22:17.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:17.043 ===================================================================================================================
00:22:17.043 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:17.043 03:11:23 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:17.043 03:11:23 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:17.043 03:11:23 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:17.043 03:11:23 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=81750
00:22:17.043 03:11:23 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:17.043 03:11:23 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 81750 /var/tmp/bdevperf.sock
00:22:17.043 03:11:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 81750 ']'
00:22:17.043 03:11:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:17.043 03:11:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:17.043 03:11:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:17.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
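Note how the second half of the test drives bdevperf: it is started with -z so it comes up idle and waits to be configured over its own RPC socket, the controller paths are then attached through that socket, and the workload is finally kicked off with the bdevperf.py helper. A condensed sketch of that launch pattern, reusing the binary paths and flags from this trace (the readiness poll via rpc_get_methods stands in for the waitforlisten helper):

  # Start bdevperf idle on its own RPC socket: queue depth 128, 4 KiB verify I/O, 1 s per run.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # Wait until the RPC socket answers before configuring anything.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done

  # ...bdev_nvme_attach_controller calls as in the trace that follows...

  # Run the configured job and collect the per-run latency summary.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests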
00:22:17.043 03:11:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.043 03:11:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 03:11:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.977 03:11:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:17.977 03:11:24 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:18.235 [2024-07-13 03:11:24.675827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:18.235 03:11:24 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:18.493 [2024-07-13 03:11:24.911955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:18.493 03:11:24 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.057 NVMe0n1 00:22:19.057 03:11:25 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.316 00:22:19.316 03:11:25 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.575 00:22:19.575 03:11:25 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.575 03:11:25 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:19.834 03:11:26 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.093 03:11:26 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:23.380 03:11:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:23.380 03:11:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:23.380 03:11:29 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=81827 00:22:23.380 03:11:29 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:23.380 03:11:29 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 81827 00:22:24.755 0 00:22:24.755 03:11:30 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:24.755 [2024-07-13 03:11:23.490873] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:24.755 [2024-07-13 03:11:23.491077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81750 ] 00:22:24.755 [2024-07-13 03:11:23.665340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.755 [2024-07-13 03:11:23.870631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.755 [2024-07-13 03:11:24.064533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:24.755 [2024-07-13 03:11:26.418301] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:24.755 [2024-07-13 03:11:26.418482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.755 [2024-07-13 03:11:26.418532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.755 [2024-07-13 03:11:26.418578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.755 [2024-07-13 03:11:26.418598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.755 [2024-07-13 03:11:26.418620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.755 [2024-07-13 03:11:26.418640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.755 [2024-07-13 03:11:26.418661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.755 [2024-07-13 03:11:26.418680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.755 [2024-07-13 03:11:26.418701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:24.755 [2024-07-13 03:11:26.418781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:24.755 [2024-07-13 03:11:26.418832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:22:24.755 [2024-07-13 03:11:26.427153] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:24.755 Running I/O for 1 seconds... 
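The trace below is the teardown half of the loop: after the 1-second run the script checks that controller NVMe0 is still registered, detaches the remaining paths one at a time (port 4422, then 4421), re-checks after each removal, and only then kills bdevperf and deletes the subsystem. A compact sketch of that verify-and-detach cycle, assuming the same RPC socket and controller name (the real script interleaves its grep checks with a few extra steps):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  for port in 4422 4421; do
    # The controller must still be visible before the next path is removed.
    "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0
    "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done

  # Give the remaining path time to settle, then confirm NVMe0 survived losing two paths.
  sleep 3
  "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0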
00:22:24.755 
00:22:24.755 Latency(us)
00:22:24.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:24.755 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:24.755 Verification LBA range: start 0x0 length 0x4000
00:22:24.755 NVMe0n1 : 1.01 5219.52 20.39 0.00 0.00 24413.09 3470.43 21805.61
00:22:24.755 ===================================================================================================================
00:22:24.755 Total : 5219.52 20.39 0.00 0.00 24413.09 3470.43 21805.61
00:22:24.755 03:11:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:24.755 03:11:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:22:25.015 03:11:31 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:25.015 03:11:31 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:25.015 03:11:31 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:22:25.272 03:11:31 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:25.545 03:11:31 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:22:28.889 03:11:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:28.889 03:11:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:22:28.889 03:11:35 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 81750
00:22:28.889 03:11:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 81750 ']'
00:22:28.889 03:11:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 81750
00:22:28.889 03:11:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:22:28.889 03:11:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:28.889 03:11:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81750
00:22:28.889 03:11:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:28.889 03:11:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:28.889 killing process with pid 81750
00:22:28.889 03:11:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81750'
00:22:28.889 03:11:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 81750
00:22:28.889 03:11:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 81750
00:22:30.266 03:11:36 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:22:30.266 03:11:36 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:30.266 03:11:36 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:22:30.266 03:11:36 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:22:30.266 03:11:36 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:30.266 03:11:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:30.266 03:11:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:30.266 03:11:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:30.266 03:11:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:30.266 03:11:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:30.266 03:11:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:30.266 rmmod nvme_tcp 00:22:30.266 rmmod nvme_fabrics 00:22:30.266 rmmod nvme_keyring 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 81490 ']' 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 81490 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 81490 ']' 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 81490 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81490 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:30.525 killing process with pid 81490 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81490' 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 81490 00:22:30.525 03:11:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 81490 00:22:31.903 03:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:31.903 03:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:31.903 03:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:31.904 03:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.904 03:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.904 03:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.904 03:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.904 03:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.904 03:11:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:31.904 00:22:31.904 real 0m36.200s 00:22:31.904 user 2m18.159s 00:22:31.904 sys 0m5.584s 00:22:31.904 03:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:31.904 03:11:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:31.904 ************************************ 00:22:31.904 END TEST nvmf_failover 00:22:31.904 ************************************ 00:22:31.904 03:11:38 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:22:31.904 03:11:38 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:31.904 03:11:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:31.904 03:11:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:31.904 03:11:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.904 ************************************ 00:22:31.904 START TEST nvmf_host_discovery 00:22:31.904 ************************************ 00:22:31.904 03:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:31.904 * Looking for test storage... 00:22:32.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:32.163 Cannot find device "nvmf_tgt_br" 00:22:32.163 
03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:32.163 Cannot find device "nvmf_tgt_br2" 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:32.163 Cannot find device "nvmf_tgt_br" 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:32.163 Cannot find device "nvmf_tgt_br2" 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:32.163 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:32.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:32.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:32.164 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:32.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:22:32.423 00:22:32.423 --- 10.0.0.2 ping statistics --- 00:22:32.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.423 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:32.423 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:32.423 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:22:32.423 00:22:32.423 --- 10.0.0.3 ping statistics --- 00:22:32.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.423 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:32.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:22:32.423 00:22:32.423 --- 10.0.0.1 ping statistics --- 00:22:32.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.423 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=82115 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 82115 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 82115 ']' 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:32.423 03:11:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.423 [2024-07-13 03:11:38.898667] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:32.423 [2024-07-13 03:11:38.898910] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.683 [2024-07-13 03:11:39.078862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.942 [2024-07-13 03:11:39.301774] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:32.942 [2024-07-13 03:11:39.301867] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.942 [2024-07-13 03:11:39.301883] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.942 [2024-07-13 03:11:39.301907] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.942 [2024-07-13 03:11:39.301936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.942 [2024-07-13 03:11:39.302003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.201 [2024-07-13 03:11:39.523839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.461 [2024-07-13 03:11:39.904154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.461 [2024-07-13 03:11:39.912348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.461 null0 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.461 null1 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=82147 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 82147 /tmp/host.sock 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 82147 ']' 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.461 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.461 03:11:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.720 [2024-07-13 03:11:40.053264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:33.720 [2024-07-13 03:11:40.053443] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82147 ] 00:22:33.980 [2024-07-13 03:11:40.226248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.239 [2024-07-13 03:11:40.497499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.239 [2024-07-13 03:11:40.697298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.806 03:11:41 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:34.806 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.065 [2024-07-13 03:11:41.437463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:35.065 
03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.065 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:22:35.324 03:11:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:35.583 [2024-07-13 03:11:42.069126] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:35.583 [2024-07-13 03:11:42.069170] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:35.583 [2024-07-13 03:11:42.069213] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:35.583 [2024-07-13 03:11:42.075205] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:35.842 [2024-07-13 03:11:42.141965] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:22:35.842 [2024-07-13 03:11:42.142052] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:36.408 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.408 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:36.408 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:36.409 03:11:42 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:36.409 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:36.668 03:11:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.668 [2024-07-13 03:11:43.009022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:36.668 [2024-07-13 03:11:43.010161] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:36.668 [2024-07-13 03:11:43.010225] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:36.668 [2024-07-13 03:11:43.016188] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.668 [2024-07-13 03:11:43.082706] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:36.668 [2024-07-13 03:11:43.082749] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:36.668 [2024-07-13 03:11:43.082762] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.668 03:11:43 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:36.668 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.927 [2024-07-13 03:11:43.242538] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:36.927 [2024-07-13 03:11:43.242602] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:36.927 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:36.928 [2024-07-13 03:11:43.248059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.928 [2024-07-13 03:11:43.248111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.928 [2024-07-13 03:11:43.248132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.928 [2024-07-13 03:11:43.248147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.928 [2024-07-13 03:11:43.248161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.928 [2024-07-13 03:11:43.248175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.928 [2024-07-13 03:11:43.248191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.928 [2024-07-13 03:11:43.248204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.928 [2024-07-13 03:11:43.248217] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:22:36.928 [2024-07-13 03:11:43.248558] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:36.928 [2024-07-13 03:11:43.248602] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:36.928 [2024-07-13 03:11:43.248701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:36.928 03:11:43 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:36.928 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.187 03:11:43 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.187 03:11:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.563 [2024-07-13 03:11:44.680552] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:38.563 [2024-07-13 03:11:44.680622] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:38.563 [2024-07-13 03:11:44.680653] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:38.563 [2024-07-13 03:11:44.686655] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:38.563 [2024-07-13 03:11:44.757607] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:38.563 [2024-07-13 03:11:44.757681] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:38.563 request: 00:22:38.563 { 00:22:38.563 "name": "nvme", 00:22:38.563 "trtype": "tcp", 00:22:38.563 "traddr": "10.0.0.2", 00:22:38.563 "adrfam": "ipv4", 00:22:38.563 "trsvcid": "8009", 00:22:38.563 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:38.563 "wait_for_attach": true, 00:22:38.563 "method": "bdev_nvme_start_discovery", 00:22:38.563 "req_id": 1 00:22:38.563 } 00:22:38.563 Got JSON-RPC error response 00:22:38.563 response: 00:22:38.563 { 00:22:38.563 "code": -17, 00:22:38.563 "message": "File exists" 00:22:38.563 } 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.563 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.563 request: 00:22:38.563 { 00:22:38.563 "name": "nvme_second", 00:22:38.563 "trtype": "tcp", 00:22:38.564 "traddr": "10.0.0.2", 00:22:38.564 "adrfam": "ipv4", 00:22:38.564 "trsvcid": "8009", 00:22:38.564 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:38.564 "wait_for_attach": true, 00:22:38.564 "method": "bdev_nvme_start_discovery", 00:22:38.564 "req_id": 1 00:22:38.564 } 00:22:38.564 Got JSON-RPC error response 00:22:38.564 response: 00:22:38.564 { 00:22:38.564 "code": -17, 00:22:38.564 "message": "File exists" 00:22:38.564 } 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:38.564 03:11:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:38.564 03:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.564 03:11:45 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:38.564 03:11:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:38.564 03:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:38.564 03:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:38.564 03:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:38.564 03:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.564 03:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:38.564 03:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.564 03:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:38.564 03:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.564 03:11:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.940 [2024-07-13 03:11:46.034415] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.940 [2024-07-13 03:11:46.034541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bc80 with addr=10.0.0.2, port=8010 00:22:39.940 [2024-07-13 03:11:46.034627] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:39.940 [2024-07-13 03:11:46.034644] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:39.940 [2024-07-13 03:11:46.034659] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:40.875 [2024-07-13 03:11:47.034464] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.875 [2024-07-13 03:11:47.034560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bf00 with addr=10.0.0.2, port=8010 00:22:40.875 [2024-07-13 03:11:47.034664] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:40.875 [2024-07-13 03:11:47.034680] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:40.875 [2024-07-13 03:11:47.034694] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:41.811 [2024-07-13 03:11:48.034126] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:41.811 request: 00:22:41.811 { 00:22:41.811 "name": "nvme_second", 00:22:41.811 "trtype": "tcp", 00:22:41.811 "traddr": "10.0.0.2", 00:22:41.811 "adrfam": "ipv4", 00:22:41.811 "trsvcid": "8010", 00:22:41.811 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:41.811 "wait_for_attach": false, 00:22:41.811 "attach_timeout_ms": 3000, 00:22:41.811 "method": "bdev_nvme_start_discovery", 00:22:41.811 "req_id": 1 00:22:41.811 } 00:22:41.811 Got JSON-RPC error response 00:22:41.811 response: 00:22:41.811 { 00:22:41.811 "code": -110, 
00:22:41.811 "message": "Connection timed out" 00:22:41.811 } 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 82147 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.811 rmmod nvme_tcp 00:22:41.811 rmmod nvme_fabrics 00:22:41.811 rmmod nvme_keyring 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 82115 ']' 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 82115 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 82115 ']' 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 82115 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82115 00:22:41.811 killing process with pid 82115 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82115' 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 82115 00:22:41.811 03:11:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 82115 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:43.216 00:22:43.216 real 0m11.032s 00:22:43.216 user 0m21.325s 00:22:43.216 sys 0m2.051s 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.216 ************************************ 00:22:43.216 END TEST nvmf_host_discovery 00:22:43.216 ************************************ 00:22:43.216 03:11:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:43.216 03:11:49 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:43.216 03:11:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:43.216 03:11:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.216 03:11:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:43.216 ************************************ 00:22:43.216 START TEST nvmf_host_multipath_status 00:22:43.216 ************************************ 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:43.216 * Looking for test storage... 
00:22:43.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.216 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:43.217 Cannot find device "nvmf_tgt_br" 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:22:43.217 Cannot find device "nvmf_tgt_br2" 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:43.217 Cannot find device "nvmf_tgt_br" 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:43.217 Cannot find device "nvmf_tgt_br2" 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:43.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:43.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:43.217 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:43.477 03:11:49 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:43.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:22:43.477 00:22:43.477 --- 10.0.0.2 ping statistics --- 00:22:43.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.477 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:43.477 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:43.477 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:22:43.477 00:22:43.477 --- 10.0.0.3 ping statistics --- 00:22:43.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.477 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:43.477 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:43.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:43.478 00:22:43.478 --- 10.0.0.1 ping statistics --- 00:22:43.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.478 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:43.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=82609 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 82609 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 82609 ']' 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:43.478 03:11:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:43.737 [2024-07-13 03:11:49.979398] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:43.737 [2024-07-13 03:11:49.979559] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.737 [2024-07-13 03:11:50.157538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:43.996 [2024-07-13 03:11:50.388279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.996 [2024-07-13 03:11:50.388377] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.996 [2024-07-13 03:11:50.388398] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.996 [2024-07-13 03:11:50.388416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.996 [2024-07-13 03:11:50.388429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.996 [2024-07-13 03:11:50.388633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.996 [2024-07-13 03:11:50.388796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.255 [2024-07-13 03:11:50.566747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:44.514 03:11:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.514 03:11:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:44.514 03:11:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.514 03:11:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.514 03:11:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:44.514 03:11:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.514 03:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=82609 00:22:44.514 03:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:44.774 [2024-07-13 03:11:51.188811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.774 03:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:45.033 Malloc0 00:22:45.033 03:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:45.292 03:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.551 03:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.810 [2024-07-13 03:11:52.202906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.810 03:11:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:46.069 [2024-07-13 03:11:52.419100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:46.070 03:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:46.070 03:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=82659 00:22:46.070 03:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:46.070 03:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 82659 /var/tmp/bdevperf.sock 00:22:46.070 03:11:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 82659 ']' 00:22:46.070 03:11:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.070 03:11:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.070 03:11:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.070 03:11:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.070 03:11:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:47.005 03:11:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.005 03:11:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:47.005 03:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:47.263 03:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:47.522 Nvme0n1 00:22:47.522 03:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:48.090 Nvme0n1 00:22:48.090 03:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:48.090 03:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:49.993 03:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:49.993 03:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:50.251 03:11:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:50.510 03:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:51.447 03:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:51.447 03:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:51.447 03:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.447 03:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:51.705 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.705 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:51.705 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.705 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:51.964 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:51.964 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:51.964 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.964 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:52.223 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.223 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:52.223 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.223 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:52.481 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.481 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:52.740 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:52.740 03:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.740 03:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.740 03:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:22:52.740 03:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.740 03:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:52.997 03:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.997 03:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:52.997 03:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:53.562 03:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:53.562 03:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:54.934 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:54.934 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:54.934 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.934 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:54.934 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:54.934 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:54.934 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.934 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:55.192 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.192 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:55.192 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.192 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:55.449 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.449 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:55.449 03:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.449 03:12:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:55.706 03:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.706 03:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:55.706 03:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.706 03:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:55.964 03:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.964 03:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:55.964 03:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.964 03:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:56.221 03:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.221 03:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:56.221 03:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:56.481 03:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:56.738 03:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:57.698 03:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:57.698 03:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:57.698 03:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.698 03:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:57.957 03:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.957 03:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:57.957 03:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.957 03:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:58.523 03:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:22:58.523 03:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:58.523 03:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.523 03:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:58.523 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.523 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:58.523 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.523 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:58.782 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.782 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:58.782 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:58.782 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.041 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.041 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:59.041 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.041 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:59.608 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.608 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:59.608 03:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:59.608 03:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:59.866 03:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:01.239 03:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:01.239 03:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:01.239 03:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.239 03:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:01.239 03:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.239 03:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:01.239 03:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.239 03:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:01.497 03:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:01.497 03:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:01.497 03:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.497 03:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:01.754 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.755 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:01.755 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.755 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:02.012 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.012 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:02.012 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.012 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:02.270 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.270 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:02.270 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.270 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:02.528 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:02.528 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:23:02.528 03:12:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:02.786 03:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:03.044 03:12:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:04.417 03:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:04.417 03:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:04.417 03:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.417 03:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:04.417 03:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.417 03:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:04.417 03:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.417 03:12:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.675 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.675 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.675 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:04.675 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.932 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.932 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:04.932 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.932 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:05.191 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.191 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:05.191 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.191 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:05.449 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:05.449 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:05.449 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.449 03:12:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:05.707 03:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:05.707 03:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:05.707 03:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:05.966 03:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:06.225 03:12:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:07.158 03:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:07.158 03:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:07.158 03:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.158 03:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.415 03:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.415 03:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:07.415 03:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.415 03:12:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.673 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.673 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.673 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.673 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.932 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.932 03:12:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.932 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:07.932 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.190 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.190 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:08.190 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.190 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:08.449 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:08.449 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:08.449 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.449 03:12:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.707 03:12:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.707 03:12:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:08.966 03:12:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:08.966 03:12:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:09.534 03:12:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:09.534 03:12:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:10.912 03:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:10.912 03:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:10.912 03:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.912 03:12:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:10.912 03:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.912 03:12:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:10.912 03:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.912 03:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:11.170 03:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.170 03:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:11.170 03:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:11.170 03:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.428 03:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.428 03:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:11.428 03:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.428 03:12:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:11.687 03:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.687 03:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:11.687 03:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.687 03:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:11.953 03:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.953 03:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:11.953 03:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:11.953 03:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.221 03:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.221 03:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:12.221 03:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:12.791 03:12:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:12.791 03:12:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:14.169 03:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:14.169 03:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:14.169 03:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.169 03:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:14.169 03:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:14.169 03:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:14.169 03:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.169 03:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:14.427 03:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.427 03:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:14.427 03:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.428 03:12:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:14.687 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.687 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:14.687 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.687 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.946 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.946 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:14.946 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:14.946 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.205 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.205 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:15.205 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:15.205 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.773 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.773 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:15.773 03:12:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:15.773 03:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:16.032 03:12:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:17.405 03:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:17.405 03:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:17.405 03:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.405 03:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:17.405 03:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.405 03:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:17.405 03:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.405 03:12:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:17.664 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.664 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:17.664 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.664 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:17.922 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.922 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:17.922 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.922 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 
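Every block in this trace follows the same pattern: set_ANA_state flips the ANA state of the 4420 and 4421 listeners on the target, the test sleeps one second, and check_status then verifies the current/connected/accessible flags that the bdevperf initiator reports for each I/O path. The helpers below are a minimal sketch reconstructed from the xtrace output (the rpc.py path, bdevperf RPC socket, NQN and jq filters are the ones shown in the log); the actual test/nvmf/host/multipath_status.sh may differ in detail:

    # Sketch reconstructed from the xtrace output above; not the verbatim test script.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {
        # $1 = listener port, $2 = io_path field (current, connected or accessible), $3 = expected value
        local status
        status=$($rpc -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

    check_status() {
        # Expected flags, in order: 4420 current, 4421 current, 4420 connected,
        # 4421 connected, 4420 accessible, 4421 accessible.
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Under the initial policy the trace shows only one path with current=true at a time (e.g. check_status true false ... after optimized/optimized); after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active both optimized paths report current=true, which is what the check_status true true true true true true cycles in this part of the trace verify.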
00:23:18.180 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.180 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:18.180 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:18.180 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.438 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.438 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:18.438 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.438 03:12:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:18.696 03:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.696 03:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:18.696 03:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:18.955 03:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:19.213 03:12:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:20.148 03:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:20.148 03:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:20.148 03:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.148 03:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:20.407 03:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.407 03:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:20.407 03:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.407 03:12:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:20.666 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:20.666 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:20.666 
03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:20.666 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.233 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.233 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:21.233 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.233 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:21.233 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.233 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:21.233 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.233 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:21.800 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.800 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:21.800 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.800 03:12:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:21.800 03:12:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:21.800 03:12:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 82659 00:23:21.800 03:12:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 82659 ']' 00:23:21.800 03:12:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 82659 00:23:21.800 03:12:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:21.800 03:12:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.800 03:12:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82659 00:23:21.800 killing process with pid 82659 00:23:21.800 03:12:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:21.800 03:12:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:21.800 03:12:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82659' 00:23:21.800 03:12:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 82659 00:23:21.800 03:12:28 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 82659 00:23:22.734 Connection closed with partial response: 00:23:22.734 00:23:22.734 00:23:22.999 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 82659 00:23:22.999 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:22.999 [2024-07-13 03:11:52.523482] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:22.999 [2024-07-13 03:11:52.523663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82659 ] 00:23:22.999 [2024-07-13 03:11:52.687758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.999 [2024-07-13 03:11:52.916666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.999 [2024-07-13 03:11:53.096678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:22.999 Running I/O for 90 seconds... 00:23:22.999 [2024-07-13 03:12:09.209329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.209457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.209548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.209580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.209616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.209639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.209671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.209694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.209725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.209749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.209794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.209816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.209846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26376 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.209869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.209912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.209938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.209987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210486] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.210904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.210969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.211001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.211035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.211059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 
03:12:09.211091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.211114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.211145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.211169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.211200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.211223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.211254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.211277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.211309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.211332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.211363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.999 [2024-07-13 03:12:09.211386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.211417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.211455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.211502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.211558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.211590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.211612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.211672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.211697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 
sqhd:001e p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.211743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.999 [2024-07-13 03:12:09.211796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:22.999 [2024-07-13 03:12:09.211840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.211862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.211908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.211931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.211960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.211982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.212920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.212979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.213005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.213061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 
03:12:09.213115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26528 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.213951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.213975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.214006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.000 [2024-07-13 03:12:09.214039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.214072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.214097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.214128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.214152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.214183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.214206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.214236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:65 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.214260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.214291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.214329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.214359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.000 [2024-07-13 03:12:09.214381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:23.000 [2024-07-13 03:12:09.214411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.214434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.214464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.214487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.214557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.214606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.214640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.214664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.214694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.214716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.214746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.214811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.214875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.214914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.214946] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.214969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.215039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.215095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.215149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.215203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.215257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.215310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.215364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.215418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.215472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 
dnr:0 00:23:23.001 [2024-07-13 03:12:09.215503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.215526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.215640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:26208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.215710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.215764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.215832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.215885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.215953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.215989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.216013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.216044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.216067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.216098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.216121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.216152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.216175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.216205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.216229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.216267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.216291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.216333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.216358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.216390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.216414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.216445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.216468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.217442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.001 [2024-07-13 03:12:09.217484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.217564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.217598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.217640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.217681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.217736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.217791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.217832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.217856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.217897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.217921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.217978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.218006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.218049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.218073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.218137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.001 [2024-07-13 03:12:09.218167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:23.001 [2024-07-13 03:12:09.218210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:09.218249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:09.218293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:09.218318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:09.218358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:09.218382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:09.218427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:09.218452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:09.218492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:23.002 [2024-07-13 03:12:09.218515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:09.218556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:09.218580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:09.218621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:09.218645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:09.218685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:09.218709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.542146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.542262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.542325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.542381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.542459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.542519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.542573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.542627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.542679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.542733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.542787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.542840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.542911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.542946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.542970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.543024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.543078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.543131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.543225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.543282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.543335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.543391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.543446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.543500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.543554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.543608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.543662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:23:23.002 [2024-07-13 03:12:25.543694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.543717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.543771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.543824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.543910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.543946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.543969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.544001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.002 [2024-07-13 03:12:25.544024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.544055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.544078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:23.002 [2024-07-13 03:12:25.544109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.002 [2024-07-13 03:12:25.544132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.544187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.544241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.544294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.544350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.544435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.544493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.544547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.544613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.544671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.544726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.544780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.544834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.544905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.544952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.544978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.545010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.545033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.545065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.545088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.545120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.545143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.545174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.545196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.545228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.545250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.545282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.545315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.545349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.545372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.545404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:23.003 [2024-07-13 03:12:25.545427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.545458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.545481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.545512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.545536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.547639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.547682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.547724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.547750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.547784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.003 [2024-07-13 03:12:25.547808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.547839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.547862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.547910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.547937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.547969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.547991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:23.003 [2024-07-13 03:12:25.548022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.003 [2024-07-13 03:12:25.548046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:23.004 [2024-07-13 03:12:25.548077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 
nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:23.004 [2024-07-13 03:12:25.548100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:23:23.004 [2024-07-13 03:12:25.548148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:23.004 [2024-07-13 03:12:25.548173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:23:23.004 [2024-07-13 03:12:25.548205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:23.004 [2024-07-13 03:12:25.548228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:23:23.004 Received shutdown signal, test time was about 33.813766 seconds
00:23:23.004
00:23:23.004 Latency(us)
00:23:23.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:23.004 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:23.004 Verification LBA range: start 0x0 length 0x4000
00:23:23.004 Nvme0n1 : 33.81 6547.06 25.57 0.00 0.00 19507.84 189.91 4026531.84
00:23:23.004 ===================================================================================================================
00:23:23.004 Total : 6547.06 25.57 0.00 0.00 19507.84 189.91 4026531.84
00:23:23.004 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:23.262 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:23:23.262 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:23:23.262 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:23:23.262 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:23.262 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:23.520 rmmod nvme_tcp
00:23:23.520 rmmod nvme_fabrics
00:23:23.520 rmmod nvme_keyring
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 82609 ']'
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 82609
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 82609 ']'
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 82609
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82609
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82609'
00:23:23.520 killing process with pid 82609
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 82609
00:23:23.520 03:12:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 82609
00:23:24.895 03:12:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:24.895 03:12:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:24.895 03:12:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:24.895 03:12:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:24.895 03:12:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:24.895 03:12:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:24.895 03:12:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:24.895 03:12:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:24.895 03:12:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:23:24.895 ************************************
00:23:24.895 END TEST nvmf_host_multipath_status
00:23:24.895 ************************************
00:23:24.895
00:23:24.895 real 0m41.936s
00:23:24.895 user 2m14.119s
00:23:24.895 sys 0m10.922s
00:23:24.895 03:12:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:24.895 03:12:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:25.153 03:12:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:25.153 03:12:31 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:23:25.153 03:12:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:25.153 03:12:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:25.153 03:12:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:25.153 ************************************
00:23:25.153 START TEST nvmf_discovery_remove_ifc
00:23:25.153 ************************************
00:23:25.153 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:23:25.153 * Looking for test storage...
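The summary above is internally consistent: at the 4096-byte I/O size, 6547.06 IOPS comes to 6547.06 * 4096 / 1048576, about 25.57 MiB/s, which matches the MiB/s column. The teardown that the xtrace then walks through condenses to roughly the following shell sequence; this is a sketch assembled only from the commands visible in the trace (pid 82609 and the nvmf_init_if interface are specific to this run), not a replacement for multipath_status.sh or nvmf/common.sh:

# Condensed sketch of the traced teardown for this run.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
trap - SIGINT SIGTERM EXIT
rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
modprobe -v -r nvme-tcp        # the rmmod output above shows nvme_tcp, nvme_fabrics and nvme_keyring being dropped
modprobe -v -r nvme-fabrics
kill 82609 && wait 82609       # stop the SPDK nvmf target (reactor_0) started for the test
ip -4 addr flush nvmf_init_if  # drop the test addresses from the virtual interface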
00:23:25.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:25.153 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:25.153 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:25.153 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:25.154 Cannot find device "nvmf_tgt_br" 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:23:25.154 Cannot find device "nvmf_tgt_br2" 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:25.154 Cannot find device "nvmf_tgt_br" 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:25.154 Cannot find device "nvmf_tgt_br2" 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:25.154 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:25.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:25.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:25.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:23:25.412 00:23:25.412 --- 10.0.0.2 ping statistics --- 00:23:25.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.412 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:25.412 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:25.412 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:23:25.412 00:23:25.412 --- 10.0.0.3 ping statistics --- 00:23:25.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.412 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:25.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:23:25.412 00:23:25.412 --- 10.0.0.1 ping statistics --- 00:23:25.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.412 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.412 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=83461 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 83461 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 83461 ']' 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.413 03:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:25.671 [2024-07-13 03:12:32.010217] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:25.671 [2024-07-13 03:12:32.010367] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.929 [2024-07-13 03:12:32.187491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.188 [2024-07-13 03:12:32.437331] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.188 [2024-07-13 03:12:32.437390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.188 [2024-07-13 03:12:32.437407] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.188 [2024-07-13 03:12:32.437421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.188 [2024-07-13 03:12:32.437432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.188 [2024-07-13 03:12:32.437488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.188 [2024-07-13 03:12:32.646358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:26.811 03:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.811 03:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:26.811 03:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:26.811 03:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.811 03:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.811 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.811 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:26.811 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.811 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.811 [2024-07-13 03:12:33.043953] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.811 [2024-07-13 03:12:33.052112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:26.811 null0 00:23:26.811 [2024-07-13 03:12:33.084278] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.811 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.811 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=83493 00:23:26.811 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:26.811 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83493 /tmp/host.sock 00:23:26.811 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 83493 ']' 00:23:26.811 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:26.811 03:12:33 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.812 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:26.812 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:26.812 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.812 03:12:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.812 [2024-07-13 03:12:33.230945] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:26.812 [2024-07-13 03:12:33.231104] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83493 ] 00:23:27.069 [2024-07-13 03:12:33.407110] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.328 [2024-07-13 03:12:33.612782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.895 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.895 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:27.895 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.895 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:27.895 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.895 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.895 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.895 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:27.895 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.895 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.895 [2024-07-13 03:12:34.379445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:28.154 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.154 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:28.154 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.154 03:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.089 [2024-07-13 03:12:35.498917] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:29.089 [2024-07-13 03:12:35.499010] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:29.089 [2024-07-13 03:12:35.499046] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.089 [2024-07-13 03:12:35.505040] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:29.089 [2024-07-13 03:12:35.571365] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:29.089 [2024-07-13 03:12:35.571505] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:29.089 [2024-07-13 03:12:35.571616] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:29.089 [2024-07-13 03:12:35.571650] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:29.089 [2024-07-13 03:12:35.571689] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:29.089 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.089 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:29.089 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.089 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.089 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.089 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.090 [2024-07-13 03:12:35.577680] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b000 was disconnected and freed. delete nvme_qpair. 
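For reference, the get_bdev_list/wait_for_bdev helpers being traced here reduce to a one-second polling loop over the host application's RPC socket. A minimal paraphrase, not the script's exact code, with scripts/rpc.py standing in for the test's rpc_cmd wrapper:

get_bdev_list() {
    # names of the bdevs the host app currently sees, sorted onto one line
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll once per second until the bdev list equals the expected value
    # (an empty argument waits for the list to drain)
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}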
00:23:29.090 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.090 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.090 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:29.348 03:12:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:30.283 03:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.283 03:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.283 03:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.283 03:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.283 03:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.283 03:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.283 03:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.283 03:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.283 03:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:30.283 03:12:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:31.654 03:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.654 03:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.654 03:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:31.654 03:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.654 03:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:31.654 03:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:31.654 03:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.654 03:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.654 03:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:31.654 03:12:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:32.586 03:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:32.586 03:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.586 03:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:32.586 03:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.586 03:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:32.586 03:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:32.586 03:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:32.586 03:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.586 03:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:32.586 03:12:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:33.519 03:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:33.519 03:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:33.519 03:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.519 03:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:33.519 03:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.519 03:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.519 03:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:33.519 03:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.519 03:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:33.519 03:12:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:34.893 03:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:34.893 03:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.893 03:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.893 03:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:34.893 03:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:23:34.893 03:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.893 03:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:34.893 03:12:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.893 [2024-07-13 03:12:40.998852] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:34.893 [2024-07-13 03:12:40.999176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.893 [2024-07-13 03:12:40.999350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.893 [2024-07-13 03:12:40.999381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.893 [2024-07-13 03:12:40.999397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.893 [2024-07-13 03:12:40.999411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.893 [2024-07-13 03:12:40.999424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.893 [2024-07-13 03:12:40.999438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.893 [2024-07-13 03:12:40.999452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.893 [2024-07-13 03:12:40.999466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.894 [2024-07-13 03:12:40.999478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.894 [2024-07-13 03:12:40.999492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:23:34.894 [2024-07-13 03:12:41.008830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:23:34.894 [2024-07-13 03:12:41.018857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.894 03:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:34.894 03:12:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:35.830 [2024-07-13 03:12:42.033057] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:23:35.830 [2024-07-13 03:12:42.033204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4420 00:23:35.830 [2024-07-13 03:12:42.033265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:23:35.830 [2024-07-13 03:12:42.033385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): 
Bad file descriptor 00:23:35.830 [2024-07-13 03:12:42.033556] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:35.830 [2024-07-13 03:12:42.033626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:35.830 [2024-07-13 03:12:42.033658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:35.830 [2024-07-13 03:12:42.033728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:35.830 [2024-07-13 03:12:42.033794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.830 [2024-07-13 03:12:42.033839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:35.830 03:12:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:35.830 03:12:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.830 03:12:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:35.830 03:12:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.830 03:12:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.830 03:12:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:35.830 03:12:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:35.830 03:12:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.830 03:12:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:35.830 03:12:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:36.767 [2024-07-13 03:12:43.034032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:36.767 [2024-07-13 03:12:43.034393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:36.767 [2024-07-13 03:12:43.034422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:36.767 [2024-07-13 03:12:43.034438] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:36.767 [2024-07-13 03:12:43.034480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
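The burst of reset/reconnect errors above is the expected reaction to the target path going away: discovery was started with a 2-second controller-loss timeout, so the controller is declared lost and torn down shortly after nvmf_tgt_if loses its address, which is what the events below show. For reference, the start_discovery invocation from earlier in the trace, with the timeout knobs annotated (the comments paraphrase the option semantics; scripts/rpc.py stands in for the rpc_cmd wrapper):

# --ctrlr-loss-timeout-sec 2   : give up on the controller ~2 s after the path is lost
# --reconnect-delay-sec 1      : wait 1 s between reconnect attempts
# --fast-io-fail-timeout-sec 1 : fail queued I/O after ~1 s instead of holding it
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach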
00:23:36.767 [2024-07-13 03:12:43.034543] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:36.767 [2024-07-13 03:12:43.034622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.767 [2024-07-13 03:12:43.034655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.767 [2024-07-13 03:12:43.034675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.767 [2024-07-13 03:12:43.034688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.767 [2024-07-13 03:12:43.034701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.767 [2024-07-13 03:12:43.034714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.767 [2024-07-13 03:12:43.034726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.767 [2024-07-13 03:12:43.034738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.767 [2024-07-13 03:12:43.034768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.767 [2024-07-13 03:12:43.034780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.767 [2024-07-13 03:12:43.034793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:36.767 [2024-07-13 03:12:43.034906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:23:36.767 [2024-07-13 03:12:43.035875] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:36.767 [2024-07-13 03:12:43.035940] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:36.767 03:12:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:38.144 03:12:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.144 03:12:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.144 03:12:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.144 03:12:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.144 03:12:44 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.144 03:12:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.144 03:12:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.144 03:12:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.144 03:12:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:38.144 03:12:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:38.712 [2024-07-13 03:12:45.046222] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:38.712 [2024-07-13 03:12:45.046277] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:38.712 [2024-07-13 03:12:45.046376] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:38.712 [2024-07-13 03:12:45.052316] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:38.712 [2024-07-13 03:12:45.118463] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:38.712 [2024-07-13 03:12:45.118568] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:38.712 [2024-07-13 03:12:45.118634] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:38.712 [2024-07-13 03:12:45.118661] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:38.712 [2024-07-13 03:12:45.118676] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:38.712 [2024-07-13 03:12:45.125160] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b780 was disconnected and freed. delete nvme_qpair. 
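At this point the interface-flap scenario has run end to end: the address and link were removed inside the target namespace, the original controller and nvme0n1 were torn down, and bringing the interface back let the still-running discovery service reattach and create a fresh bdev. Condensed from the commands in the trace (the intervening wait_for_bdev polling is omitted):

# take the target path away; the discovery ctrlr loses it and nvme0n1 is deleted
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

# restore it; the discovery ctrlr reattaches and a new bdev (nvme1n1 here) appears
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up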
00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 83493 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 83493 ']' 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 83493 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83493 00:23:38.971 killing process with pid 83493 00:23:38.971 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:38.972 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:38.972 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83493' 00:23:38.972 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 83493 00:23:38.972 03:12:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 83493 00:23:40.351 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.352 rmmod nvme_tcp 00:23:40.352 rmmod nvme_fabrics 00:23:40.352 rmmod nvme_keyring 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:23:40.352 03:12:46 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 83461 ']' 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 83461 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 83461 ']' 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 83461 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83461 00:23:40.352 killing process with pid 83461 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83461' 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 83461 00:23:40.352 03:12:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 83461 00:23:41.730 03:12:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:41.730 03:12:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:41.730 03:12:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:41.730 03:12:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:41.730 03:12:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:41.730 03:12:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.730 03:12:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.730 03:12:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.730 03:12:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:41.730 00:23:41.730 real 0m16.621s 00:23:41.730 user 0m28.174s 00:23:41.730 sys 0m2.561s 00:23:41.730 03:12:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:41.730 ************************************ 00:23:41.730 03:12:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.730 END TEST nvmf_discovery_remove_ifc 00:23:41.730 ************************************ 00:23:41.730 03:12:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:41.730 03:12:48 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:41.730 03:12:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:41.730 03:12:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.730 03:12:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:41.730 ************************************ 00:23:41.730 START TEST nvmf_identify_kernel_target 00:23:41.730 ************************************ 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:41.730 * Looking for test storage... 00:23:41.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:41.730 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:41.731 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:41.991 Cannot find device "nvmf_tgt_br" 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:41.991 Cannot find device "nvmf_tgt_br2" 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:41.991 Cannot find device "nvmf_tgt_br" 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:41.991 Cannot find device "nvmf_tgt_br2" 00:23:41.991 03:12:48 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:41.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:41.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:41.991 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:23:42.278 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:42.278 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:42.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:23:42.279 00:23:42.279 --- 10.0.0.2 ping statistics --- 00:23:42.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.279 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:42.279 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:42.279 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:23:42.279 00:23:42.279 --- 10.0.0.3 ping statistics --- 00:23:42.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.279 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:42.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:42.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:42.279 00:23:42.279 --- 10.0.0.1 ping statistics --- 00:23:42.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.279 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:42.279 03:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:42.538 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:42.538 Waiting for block devices as requested 00:23:42.538 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:42.821 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:42.821 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:42.821 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:42.821 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:42.821 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:42.821 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:42.821 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:42.821 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:42.821 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:42.822 No valid GPT data, bailing 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:42.822 No valid GPT data, bailing 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:42.822 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:43.081 No valid GPT data, bailing 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:43.081 No valid GPT data, bailing 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
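
The loop traced just above (nvmf/common.sh@650..653 in this run) screens the host's NVMe namespaces for one that can safely back the kernel target: zoned devices are skipped, as is anything that already carries a partition table, and the last survivor (/dev/nvme1n1 here) is kept. A condensed sketch of that selection logic, with plain blkid standing in for the repo's spdk-gpt.py helper (that substitution is made for this sketch only):

# Pick an unused, non-zoned NVMe namespace to back the kernel nvmet target.
# Condensed from the selection loop traced above; blkid replaces the repo's
# spdk-gpt.py partition check for the purposes of this sketch.
nvme=
for block in /sys/block/nvme*; do
    [[ -e $block ]] || continue
    dev=${block##*/}
    # skip zoned namespaces
    if [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]]; then
        continue
    fi
    # skip namespaces that already carry a partition table (likely in use)
    if [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
        continue
    fi
    nvme=/dev/$dev    # last candidate that passes wins; /dev/nvme1n1 in this run
done
echo "selected backing device: ${nvme:-none}"
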
00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:43.081 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:43.082 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:43.082 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:43.082 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:43.082 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:43.082 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:43.082 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:43.082 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -a 10.0.0.1 -t tcp -s 4420 00:23:43.082 00:23:43.082 Discovery Log Number of Records 2, Generation counter 2 00:23:43.082 =====Discovery Log Entry 0====== 00:23:43.082 trtype: tcp 00:23:43.082 adrfam: ipv4 00:23:43.082 subtype: current discovery subsystem 00:23:43.082 treq: not specified, sq flow control disable supported 00:23:43.082 portid: 1 00:23:43.082 trsvcid: 4420 00:23:43.082 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:43.082 traddr: 10.0.0.1 00:23:43.082 eflags: none 00:23:43.082 sectype: none 00:23:43.082 =====Discovery Log Entry 1====== 00:23:43.082 trtype: tcp 00:23:43.082 adrfam: ipv4 00:23:43.082 subtype: nvme subsystem 00:23:43.082 treq: not specified, sq flow control disable supported 00:23:43.082 portid: 1 00:23:43.082 trsvcid: 4420 00:23:43.082 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:43.082 traddr: 10.0.0.1 00:23:43.082 eflags: none 00:23:43.082 sectype: none 00:23:43.082 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:43.082 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:43.341 ===================================================== 00:23:43.341 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:43.341 ===================================================== 00:23:43.341 Controller Capabilities/Features 00:23:43.341 ================================ 00:23:43.341 Vendor ID: 0000 00:23:43.341 Subsystem Vendor ID: 0000 00:23:43.341 Serial Number: 2e9805a6a029d3192015 00:23:43.341 Model Number: Linux 00:23:43.341 Firmware Version: 6.7.0-68 00:23:43.341 Recommended Arb Burst: 0 00:23:43.341 IEEE OUI Identifier: 00 00 00 00:23:43.341 Multi-path I/O 00:23:43.341 May have multiple subsystem ports: No 00:23:43.341 May have multiple controllers: No 00:23:43.341 Associated with SR-IOV VF: No 00:23:43.341 Max Data Transfer Size: Unlimited 00:23:43.341 Max Number of Namespaces: 0 
00:23:43.341 Max Number of I/O Queues: 1024 00:23:43.341 NVMe Specification Version (VS): 1.3 00:23:43.341 NVMe Specification Version (Identify): 1.3 00:23:43.341 Maximum Queue Entries: 1024 00:23:43.341 Contiguous Queues Required: No 00:23:43.341 Arbitration Mechanisms Supported 00:23:43.341 Weighted Round Robin: Not Supported 00:23:43.341 Vendor Specific: Not Supported 00:23:43.341 Reset Timeout: 7500 ms 00:23:43.341 Doorbell Stride: 4 bytes 00:23:43.341 NVM Subsystem Reset: Not Supported 00:23:43.341 Command Sets Supported 00:23:43.341 NVM Command Set: Supported 00:23:43.341 Boot Partition: Not Supported 00:23:43.341 Memory Page Size Minimum: 4096 bytes 00:23:43.341 Memory Page Size Maximum: 4096 bytes 00:23:43.341 Persistent Memory Region: Not Supported 00:23:43.341 Optional Asynchronous Events Supported 00:23:43.341 Namespace Attribute Notices: Not Supported 00:23:43.341 Firmware Activation Notices: Not Supported 00:23:43.341 ANA Change Notices: Not Supported 00:23:43.341 PLE Aggregate Log Change Notices: Not Supported 00:23:43.341 LBA Status Info Alert Notices: Not Supported 00:23:43.341 EGE Aggregate Log Change Notices: Not Supported 00:23:43.341 Normal NVM Subsystem Shutdown event: Not Supported 00:23:43.341 Zone Descriptor Change Notices: Not Supported 00:23:43.341 Discovery Log Change Notices: Supported 00:23:43.341 Controller Attributes 00:23:43.341 128-bit Host Identifier: Not Supported 00:23:43.341 Non-Operational Permissive Mode: Not Supported 00:23:43.341 NVM Sets: Not Supported 00:23:43.341 Read Recovery Levels: Not Supported 00:23:43.341 Endurance Groups: Not Supported 00:23:43.341 Predictable Latency Mode: Not Supported 00:23:43.341 Traffic Based Keep ALive: Not Supported 00:23:43.341 Namespace Granularity: Not Supported 00:23:43.341 SQ Associations: Not Supported 00:23:43.341 UUID List: Not Supported 00:23:43.342 Multi-Domain Subsystem: Not Supported 00:23:43.342 Fixed Capacity Management: Not Supported 00:23:43.342 Variable Capacity Management: Not Supported 00:23:43.342 Delete Endurance Group: Not Supported 00:23:43.342 Delete NVM Set: Not Supported 00:23:43.342 Extended LBA Formats Supported: Not Supported 00:23:43.342 Flexible Data Placement Supported: Not Supported 00:23:43.342 00:23:43.342 Controller Memory Buffer Support 00:23:43.342 ================================ 00:23:43.342 Supported: No 00:23:43.342 00:23:43.342 Persistent Memory Region Support 00:23:43.342 ================================ 00:23:43.342 Supported: No 00:23:43.342 00:23:43.342 Admin Command Set Attributes 00:23:43.342 ============================ 00:23:43.342 Security Send/Receive: Not Supported 00:23:43.342 Format NVM: Not Supported 00:23:43.342 Firmware Activate/Download: Not Supported 00:23:43.342 Namespace Management: Not Supported 00:23:43.342 Device Self-Test: Not Supported 00:23:43.342 Directives: Not Supported 00:23:43.342 NVMe-MI: Not Supported 00:23:43.342 Virtualization Management: Not Supported 00:23:43.342 Doorbell Buffer Config: Not Supported 00:23:43.342 Get LBA Status Capability: Not Supported 00:23:43.342 Command & Feature Lockdown Capability: Not Supported 00:23:43.342 Abort Command Limit: 1 00:23:43.342 Async Event Request Limit: 1 00:23:43.342 Number of Firmware Slots: N/A 00:23:43.342 Firmware Slot 1 Read-Only: N/A 00:23:43.342 Firmware Activation Without Reset: N/A 00:23:43.342 Multiple Update Detection Support: N/A 00:23:43.342 Firmware Update Granularity: No Information Provided 00:23:43.342 Per-Namespace SMART Log: No 00:23:43.342 Asymmetric Namespace Access Log Page: 
Not Supported 00:23:43.342 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:43.342 Command Effects Log Page: Not Supported 00:23:43.342 Get Log Page Extended Data: Supported 00:23:43.342 Telemetry Log Pages: Not Supported 00:23:43.342 Persistent Event Log Pages: Not Supported 00:23:43.342 Supported Log Pages Log Page: May Support 00:23:43.342 Commands Supported & Effects Log Page: Not Supported 00:23:43.342 Feature Identifiers & Effects Log Page:May Support 00:23:43.342 NVMe-MI Commands & Effects Log Page: May Support 00:23:43.342 Data Area 4 for Telemetry Log: Not Supported 00:23:43.342 Error Log Page Entries Supported: 1 00:23:43.342 Keep Alive: Not Supported 00:23:43.342 00:23:43.342 NVM Command Set Attributes 00:23:43.342 ========================== 00:23:43.342 Submission Queue Entry Size 00:23:43.342 Max: 1 00:23:43.342 Min: 1 00:23:43.342 Completion Queue Entry Size 00:23:43.342 Max: 1 00:23:43.342 Min: 1 00:23:43.342 Number of Namespaces: 0 00:23:43.342 Compare Command: Not Supported 00:23:43.342 Write Uncorrectable Command: Not Supported 00:23:43.342 Dataset Management Command: Not Supported 00:23:43.342 Write Zeroes Command: Not Supported 00:23:43.342 Set Features Save Field: Not Supported 00:23:43.342 Reservations: Not Supported 00:23:43.342 Timestamp: Not Supported 00:23:43.342 Copy: Not Supported 00:23:43.342 Volatile Write Cache: Not Present 00:23:43.342 Atomic Write Unit (Normal): 1 00:23:43.342 Atomic Write Unit (PFail): 1 00:23:43.342 Atomic Compare & Write Unit: 1 00:23:43.342 Fused Compare & Write: Not Supported 00:23:43.342 Scatter-Gather List 00:23:43.342 SGL Command Set: Supported 00:23:43.342 SGL Keyed: Not Supported 00:23:43.342 SGL Bit Bucket Descriptor: Not Supported 00:23:43.342 SGL Metadata Pointer: Not Supported 00:23:43.342 Oversized SGL: Not Supported 00:23:43.342 SGL Metadata Address: Not Supported 00:23:43.342 SGL Offset: Supported 00:23:43.342 Transport SGL Data Block: Not Supported 00:23:43.342 Replay Protected Memory Block: Not Supported 00:23:43.342 00:23:43.342 Firmware Slot Information 00:23:43.342 ========================= 00:23:43.342 Active slot: 0 00:23:43.342 00:23:43.342 00:23:43.342 Error Log 00:23:43.342 ========= 00:23:43.342 00:23:43.342 Active Namespaces 00:23:43.342 ================= 00:23:43.342 Discovery Log Page 00:23:43.342 ================== 00:23:43.342 Generation Counter: 2 00:23:43.342 Number of Records: 2 00:23:43.342 Record Format: 0 00:23:43.342 00:23:43.342 Discovery Log Entry 0 00:23:43.342 ---------------------- 00:23:43.342 Transport Type: 3 (TCP) 00:23:43.342 Address Family: 1 (IPv4) 00:23:43.342 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:43.342 Entry Flags: 00:23:43.342 Duplicate Returned Information: 0 00:23:43.342 Explicit Persistent Connection Support for Discovery: 0 00:23:43.342 Transport Requirements: 00:23:43.342 Secure Channel: Not Specified 00:23:43.342 Port ID: 1 (0x0001) 00:23:43.342 Controller ID: 65535 (0xffff) 00:23:43.342 Admin Max SQ Size: 32 00:23:43.342 Transport Service Identifier: 4420 00:23:43.342 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:43.342 Transport Address: 10.0.0.1 00:23:43.342 Discovery Log Entry 1 00:23:43.342 ---------------------- 00:23:43.342 Transport Type: 3 (TCP) 00:23:43.342 Address Family: 1 (IPv4) 00:23:43.342 Subsystem Type: 2 (NVM Subsystem) 00:23:43.342 Entry Flags: 00:23:43.342 Duplicate Returned Information: 0 00:23:43.342 Explicit Persistent Connection Support for Discovery: 0 00:23:43.342 Transport Requirements: 00:23:43.342 
Secure Channel: Not Specified 00:23:43.342 Port ID: 1 (0x0001) 00:23:43.342 Controller ID: 65535 (0xffff) 00:23:43.342 Admin Max SQ Size: 32 00:23:43.342 Transport Service Identifier: 4420 00:23:43.342 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:43.342 Transport Address: 10.0.0.1 00:23:43.342 03:12:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:43.602 get_feature(0x01) failed 00:23:43.602 get_feature(0x02) failed 00:23:43.602 get_feature(0x04) failed 00:23:43.602 ===================================================== 00:23:43.602 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:43.602 ===================================================== 00:23:43.602 Controller Capabilities/Features 00:23:43.602 ================================ 00:23:43.602 Vendor ID: 0000 00:23:43.602 Subsystem Vendor ID: 0000 00:23:43.602 Serial Number: 64533dc25fb2edd9dfae 00:23:43.602 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:43.602 Firmware Version: 6.7.0-68 00:23:43.602 Recommended Arb Burst: 6 00:23:43.602 IEEE OUI Identifier: 00 00 00 00:23:43.602 Multi-path I/O 00:23:43.602 May have multiple subsystem ports: Yes 00:23:43.602 May have multiple controllers: Yes 00:23:43.602 Associated with SR-IOV VF: No 00:23:43.602 Max Data Transfer Size: Unlimited 00:23:43.602 Max Number of Namespaces: 1024 00:23:43.602 Max Number of I/O Queues: 128 00:23:43.602 NVMe Specification Version (VS): 1.3 00:23:43.602 NVMe Specification Version (Identify): 1.3 00:23:43.602 Maximum Queue Entries: 1024 00:23:43.602 Contiguous Queues Required: No 00:23:43.602 Arbitration Mechanisms Supported 00:23:43.602 Weighted Round Robin: Not Supported 00:23:43.602 Vendor Specific: Not Supported 00:23:43.602 Reset Timeout: 7500 ms 00:23:43.602 Doorbell Stride: 4 bytes 00:23:43.602 NVM Subsystem Reset: Not Supported 00:23:43.602 Command Sets Supported 00:23:43.602 NVM Command Set: Supported 00:23:43.602 Boot Partition: Not Supported 00:23:43.602 Memory Page Size Minimum: 4096 bytes 00:23:43.602 Memory Page Size Maximum: 4096 bytes 00:23:43.602 Persistent Memory Region: Not Supported 00:23:43.602 Optional Asynchronous Events Supported 00:23:43.602 Namespace Attribute Notices: Supported 00:23:43.602 Firmware Activation Notices: Not Supported 00:23:43.602 ANA Change Notices: Supported 00:23:43.602 PLE Aggregate Log Change Notices: Not Supported 00:23:43.602 LBA Status Info Alert Notices: Not Supported 00:23:43.602 EGE Aggregate Log Change Notices: Not Supported 00:23:43.602 Normal NVM Subsystem Shutdown event: Not Supported 00:23:43.602 Zone Descriptor Change Notices: Not Supported 00:23:43.602 Discovery Log Change Notices: Not Supported 00:23:43.602 Controller Attributes 00:23:43.602 128-bit Host Identifier: Supported 00:23:43.602 Non-Operational Permissive Mode: Not Supported 00:23:43.602 NVM Sets: Not Supported 00:23:43.602 Read Recovery Levels: Not Supported 00:23:43.602 Endurance Groups: Not Supported 00:23:43.602 Predictable Latency Mode: Not Supported 00:23:43.602 Traffic Based Keep ALive: Supported 00:23:43.602 Namespace Granularity: Not Supported 00:23:43.602 SQ Associations: Not Supported 00:23:43.602 UUID List: Not Supported 00:23:43.602 Multi-Domain Subsystem: Not Supported 00:23:43.602 Fixed Capacity Management: Not Supported 00:23:43.602 Variable Capacity Management: Not Supported 00:23:43.603 
Delete Endurance Group: Not Supported 00:23:43.603 Delete NVM Set: Not Supported 00:23:43.603 Extended LBA Formats Supported: Not Supported 00:23:43.603 Flexible Data Placement Supported: Not Supported 00:23:43.603 00:23:43.603 Controller Memory Buffer Support 00:23:43.603 ================================ 00:23:43.603 Supported: No 00:23:43.603 00:23:43.603 Persistent Memory Region Support 00:23:43.603 ================================ 00:23:43.603 Supported: No 00:23:43.603 00:23:43.603 Admin Command Set Attributes 00:23:43.603 ============================ 00:23:43.603 Security Send/Receive: Not Supported 00:23:43.603 Format NVM: Not Supported 00:23:43.603 Firmware Activate/Download: Not Supported 00:23:43.603 Namespace Management: Not Supported 00:23:43.603 Device Self-Test: Not Supported 00:23:43.603 Directives: Not Supported 00:23:43.603 NVMe-MI: Not Supported 00:23:43.603 Virtualization Management: Not Supported 00:23:43.603 Doorbell Buffer Config: Not Supported 00:23:43.603 Get LBA Status Capability: Not Supported 00:23:43.603 Command & Feature Lockdown Capability: Not Supported 00:23:43.603 Abort Command Limit: 4 00:23:43.603 Async Event Request Limit: 4 00:23:43.603 Number of Firmware Slots: N/A 00:23:43.603 Firmware Slot 1 Read-Only: N/A 00:23:43.603 Firmware Activation Without Reset: N/A 00:23:43.603 Multiple Update Detection Support: N/A 00:23:43.603 Firmware Update Granularity: No Information Provided 00:23:43.603 Per-Namespace SMART Log: Yes 00:23:43.603 Asymmetric Namespace Access Log Page: Supported 00:23:43.603 ANA Transition Time : 10 sec 00:23:43.603 00:23:43.603 Asymmetric Namespace Access Capabilities 00:23:43.603 ANA Optimized State : Supported 00:23:43.603 ANA Non-Optimized State : Supported 00:23:43.603 ANA Inaccessible State : Supported 00:23:43.603 ANA Persistent Loss State : Supported 00:23:43.603 ANA Change State : Supported 00:23:43.603 ANAGRPID is not changed : No 00:23:43.603 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:43.603 00:23:43.603 ANA Group Identifier Maximum : 128 00:23:43.603 Number of ANA Group Identifiers : 128 00:23:43.603 Max Number of Allowed Namespaces : 1024 00:23:43.603 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:43.603 Command Effects Log Page: Supported 00:23:43.603 Get Log Page Extended Data: Supported 00:23:43.603 Telemetry Log Pages: Not Supported 00:23:43.603 Persistent Event Log Pages: Not Supported 00:23:43.603 Supported Log Pages Log Page: May Support 00:23:43.603 Commands Supported & Effects Log Page: Not Supported 00:23:43.603 Feature Identifiers & Effects Log Page:May Support 00:23:43.603 NVMe-MI Commands & Effects Log Page: May Support 00:23:43.603 Data Area 4 for Telemetry Log: Not Supported 00:23:43.603 Error Log Page Entries Supported: 128 00:23:43.603 Keep Alive: Supported 00:23:43.603 Keep Alive Granularity: 1000 ms 00:23:43.603 00:23:43.603 NVM Command Set Attributes 00:23:43.603 ========================== 00:23:43.603 Submission Queue Entry Size 00:23:43.603 Max: 64 00:23:43.603 Min: 64 00:23:43.603 Completion Queue Entry Size 00:23:43.603 Max: 16 00:23:43.603 Min: 16 00:23:43.603 Number of Namespaces: 1024 00:23:43.603 Compare Command: Not Supported 00:23:43.603 Write Uncorrectable Command: Not Supported 00:23:43.603 Dataset Management Command: Supported 00:23:43.603 Write Zeroes Command: Supported 00:23:43.603 Set Features Save Field: Not Supported 00:23:43.603 Reservations: Not Supported 00:23:43.603 Timestamp: Not Supported 00:23:43.603 Copy: Not Supported 00:23:43.603 Volatile Write Cache: Present 
00:23:43.603 Atomic Write Unit (Normal): 1 00:23:43.603 Atomic Write Unit (PFail): 1 00:23:43.603 Atomic Compare & Write Unit: 1 00:23:43.603 Fused Compare & Write: Not Supported 00:23:43.603 Scatter-Gather List 00:23:43.603 SGL Command Set: Supported 00:23:43.603 SGL Keyed: Not Supported 00:23:43.603 SGL Bit Bucket Descriptor: Not Supported 00:23:43.603 SGL Metadata Pointer: Not Supported 00:23:43.603 Oversized SGL: Not Supported 00:23:43.603 SGL Metadata Address: Not Supported 00:23:43.603 SGL Offset: Supported 00:23:43.603 Transport SGL Data Block: Not Supported 00:23:43.603 Replay Protected Memory Block: Not Supported 00:23:43.603 00:23:43.603 Firmware Slot Information 00:23:43.603 ========================= 00:23:43.603 Active slot: 0 00:23:43.603 00:23:43.603 Asymmetric Namespace Access 00:23:43.603 =========================== 00:23:43.603 Change Count : 0 00:23:43.603 Number of ANA Group Descriptors : 1 00:23:43.603 ANA Group Descriptor : 0 00:23:43.603 ANA Group ID : 1 00:23:43.603 Number of NSID Values : 1 00:23:43.603 Change Count : 0 00:23:43.603 ANA State : 1 00:23:43.603 Namespace Identifier : 1 00:23:43.603 00:23:43.603 Commands Supported and Effects 00:23:43.603 ============================== 00:23:43.603 Admin Commands 00:23:43.603 -------------- 00:23:43.603 Get Log Page (02h): Supported 00:23:43.603 Identify (06h): Supported 00:23:43.603 Abort (08h): Supported 00:23:43.603 Set Features (09h): Supported 00:23:43.603 Get Features (0Ah): Supported 00:23:43.603 Asynchronous Event Request (0Ch): Supported 00:23:43.603 Keep Alive (18h): Supported 00:23:43.603 I/O Commands 00:23:43.603 ------------ 00:23:43.603 Flush (00h): Supported 00:23:43.603 Write (01h): Supported LBA-Change 00:23:43.603 Read (02h): Supported 00:23:43.603 Write Zeroes (08h): Supported LBA-Change 00:23:43.603 Dataset Management (09h): Supported 00:23:43.603 00:23:43.603 Error Log 00:23:43.603 ========= 00:23:43.603 Entry: 0 00:23:43.603 Error Count: 0x3 00:23:43.603 Submission Queue Id: 0x0 00:23:43.603 Command Id: 0x5 00:23:43.603 Phase Bit: 0 00:23:43.603 Status Code: 0x2 00:23:43.603 Status Code Type: 0x0 00:23:43.603 Do Not Retry: 1 00:23:43.603 Error Location: 0x28 00:23:43.603 LBA: 0x0 00:23:43.603 Namespace: 0x0 00:23:43.603 Vendor Log Page: 0x0 00:23:43.603 ----------- 00:23:43.603 Entry: 1 00:23:43.603 Error Count: 0x2 00:23:43.603 Submission Queue Id: 0x0 00:23:43.603 Command Id: 0x5 00:23:43.603 Phase Bit: 0 00:23:43.603 Status Code: 0x2 00:23:43.603 Status Code Type: 0x0 00:23:43.603 Do Not Retry: 1 00:23:43.603 Error Location: 0x28 00:23:43.603 LBA: 0x0 00:23:43.603 Namespace: 0x0 00:23:43.603 Vendor Log Page: 0x0 00:23:43.603 ----------- 00:23:43.603 Entry: 2 00:23:43.603 Error Count: 0x1 00:23:43.603 Submission Queue Id: 0x0 00:23:43.603 Command Id: 0x4 00:23:43.603 Phase Bit: 0 00:23:43.603 Status Code: 0x2 00:23:43.603 Status Code Type: 0x0 00:23:43.603 Do Not Retry: 1 00:23:43.603 Error Location: 0x28 00:23:43.603 LBA: 0x0 00:23:43.603 Namespace: 0x0 00:23:43.603 Vendor Log Page: 0x0 00:23:43.603 00:23:43.603 Number of Queues 00:23:43.603 ================ 00:23:43.603 Number of I/O Submission Queues: 128 00:23:43.603 Number of I/O Completion Queues: 128 00:23:43.603 00:23:43.603 ZNS Specific Controller Data 00:23:43.603 ============================ 00:23:43.603 Zone Append Size Limit: 0 00:23:43.603 00:23:43.603 00:23:43.603 Active Namespaces 00:23:43.603 ================= 00:23:43.603 get_feature(0x05) failed 00:23:43.603 Namespace ID:1 00:23:43.603 Command Set Identifier: NVM (00h) 
00:23:43.603 Deallocate: Supported 00:23:43.603 Deallocated/Unwritten Error: Not Supported 00:23:43.603 Deallocated Read Value: Unknown 00:23:43.603 Deallocate in Write Zeroes: Not Supported 00:23:43.603 Deallocated Guard Field: 0xFFFF 00:23:43.603 Flush: Supported 00:23:43.603 Reservation: Not Supported 00:23:43.603 Namespace Sharing Capabilities: Multiple Controllers 00:23:43.603 Size (in LBAs): 1310720 (5GiB) 00:23:43.603 Capacity (in LBAs): 1310720 (5GiB) 00:23:43.603 Utilization (in LBAs): 1310720 (5GiB) 00:23:43.603 UUID: 987e7102-d3db-4f06-adab-8e0ad26badd7 00:23:43.603 Thin Provisioning: Not Supported 00:23:43.603 Per-NS Atomic Units: Yes 00:23:43.603 Atomic Boundary Size (Normal): 0 00:23:43.603 Atomic Boundary Size (PFail): 0 00:23:43.603 Atomic Boundary Offset: 0 00:23:43.603 NGUID/EUI64 Never Reused: No 00:23:43.603 ANA group ID: 1 00:23:43.603 Namespace Write Protected: No 00:23:43.603 Number of LBA Formats: 1 00:23:43.603 Current LBA Format: LBA Format #00 00:23:43.603 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:23:43.603 00:23:43.603 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:43.603 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:43.603 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:43.862 rmmod nvme_tcp 00:23:43.862 rmmod nvme_fabrics 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:43.862 
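
Everything the identify output above reports comes from a kernel (nvmet) target that configure_kernel_target assembled a few lines earlier purely through configfs. With the xtrace prefixes stripped away, the export reduces to the sketch below; the NQN, address, port and backing device are taken from this run, while the exact attribute file behind each bare echo in the trace is hidden by xtrace, so the standard nvmet attribute names are assumed here:

# Export the selected namespace through the Linux kernel NVMe-oF/TCP target,
# then confirm it from the initiator side; condensed from the trace above.
modprobe nvmet nvmet-tcp
nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2016-06.io.spdk:testnqn

mkdir $nvmet/subsystems/$subnqn
mkdir $nvmet/subsystems/$subnqn/namespaces/1
mkdir $nvmet/ports/1

echo SPDK-$subnqn > $nvmet/subsystems/$subnqn/attr_model           # assumed target of the SPDK-... echo
echo 1            > $nvmet/subsystems/$subnqn/attr_allow_any_host  # assumed target of the first 'echo 1'
echo /dev/nvme1n1 > $nvmet/subsystems/$subnqn/namespaces/1/device_path
echo 1            > $nvmet/subsystems/$subnqn/namespaces/1/enable

echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
echo tcp      > $nvmet/ports/1/addr_trtype
echo 4420     > $nvmet/ports/1/addr_trsvcid
echo ipv4     > $nvmet/ports/1/addr_adrfam

ln -s $nvmet/subsystems/$subnqn $nvmet/ports/1/subsystems/

# The discovery log should now show the discovery subsystem plus testnqn:
nvme discover -t tcp -a 10.0.0.1 -s 4420

The identify output is consistent with this view: the testnqn controller reports Model Number SPDK-nqn.2016-06.io.spdk:testnqn and a Firmware Version matching the running kernel, and the teardown traced on the following lines simply reverses these steps before the next test starts.
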
03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:43.862 03:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:44.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:44.797 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:44.797 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:44.797 ************************************ 00:23:44.797 END TEST nvmf_identify_kernel_target 00:23:44.797 ************************************ 00:23:44.797 00:23:44.797 real 0m3.114s 00:23:44.797 user 0m1.130s 00:23:44.797 sys 0m1.467s 00:23:44.797 03:12:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:44.797 03:12:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.797 03:12:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:44.797 03:12:51 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:44.797 03:12:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:44.797 03:12:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:44.797 03:12:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:44.797 ************************************ 00:23:44.797 START TEST nvmf_auth_host 00:23:44.797 ************************************ 00:23:44.797 03:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:45.055 * Looking for test storage... 
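
As noted above, the clean_kernel_target teardown traced before the END TEST banner undoes the export in reverse order: disable the namespace, drop the port symlink, remove the configfs directories, then unload the modules. Condensed, with the same paths as this run (again assuming the bare 'echo 0' lands in the namespace's enable attribute, which xtrace does not show):

# Reverse of the export sketched earlier: tear the kernel nvmet target back down.
nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2016-06.io.spdk:testnqn

echo 0 > $nvmet/subsystems/$subnqn/namespaces/1/enable   # assumed target of the 'echo 0'
rm -f  $nvmet/ports/1/subsystems/$subnqn
rmdir  $nvmet/subsystems/$subnqn/namespaces/1
rmdir  $nvmet/ports/1
rmdir  $nvmet/subsystems/$subnqn
modprobe -r nvmet_tcp nvmet
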
00:23:45.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.055 03:12:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:45.056 Cannot find device "nvmf_tgt_br" 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:45.056 Cannot find device "nvmf_tgt_br2" 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:45.056 Cannot find device "nvmf_tgt_br" 
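For reference, a condensed sketch of the veth/namespace topology that nvmf_veth_init rebuilds in the trace that follows (interface names, addresses, and the port-4420 iptables rule are taken verbatim from this log; the second target interface nvmf_tgt_if2/10.0.0.3 and the link-up steps are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port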
00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:45.056 Cannot find device "nvmf_tgt_br2" 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:45.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:45.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:45.056 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:45.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:23:45.315 00:23:45.315 --- 10.0.0.2 ping statistics --- 00:23:45.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.315 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:45.315 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:45.315 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:23:45.315 00:23:45.315 --- 10.0.0.3 ping statistics --- 00:23:45.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.315 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:45.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:45.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:45.315 00:23:45.315 --- 10.0.0.1 ping statistics --- 00:23:45.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.315 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=84408 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 84408 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 84408 ']' 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:45.315 03:12:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:45.315 03:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d77640a49a161c2d9b796d1602fa9b4e 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nKy 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d77640a49a161c2d9b796d1602fa9b4e 0 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d77640a49a161c2d9b796d1602fa9b4e 0 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d77640a49a161c2d9b796d1602fa9b4e 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nKy 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nKy 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.nKy 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=99ec10d43f091f6704b3b4f813fede649f3bbac85feb9168a2cd74ea8bfba749 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.AcH 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 99ec10d43f091f6704b3b4f813fede649f3bbac85feb9168a2cd74ea8bfba749 3 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 99ec10d43f091f6704b3b4f813fede649f3bbac85feb9168a2cd74ea8bfba749 3 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=99ec10d43f091f6704b3b4f813fede649f3bbac85feb9168a2cd74ea8bfba749 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:46.690 03:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.AcH 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.AcH 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.AcH 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=55bdd712979a8ac55c7c09277cf1c7219a7b915ab20e5fd1 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jdJ 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 55bdd712979a8ac55c7c09277cf1c7219a7b915ab20e5fd1 0 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 55bdd712979a8ac55c7c09277cf1c7219a7b915ab20e5fd1 0 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=55bdd712979a8ac55c7c09277cf1c7219a7b915ab20e5fd1 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jdJ 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jdJ 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.jdJ 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:46.690 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=52c6e5152cc2fd6a73de1f3698883f781c67a38d3b8849e7 00:23:46.691 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:46.691 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.X9s 00:23:46.691 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 52c6e5152cc2fd6a73de1f3698883f781c67a38d3b8849e7 2 00:23:46.691 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 52c6e5152cc2fd6a73de1f3698883f781c67a38d3b8849e7 2 00:23:46.691 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.691 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.691 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=52c6e5152cc2fd6a73de1f3698883f781c67a38d3b8849e7 00:23:46.691 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:46.691 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.X9s 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.X9s 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.X9s 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5bb04f0e75214e62370a06aa85ee480f 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.o0h 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5bb04f0e75214e62370a06aa85ee480f 
1 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5bb04f0e75214e62370a06aa85ee480f 1 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.949 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5bb04f0e75214e62370a06aa85ee480f 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.o0h 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.o0h 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.o0h 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b88f36e201ad481332a892530cc1684f 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hmd 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b88f36e201ad481332a892530cc1684f 1 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b88f36e201ad481332a892530cc1684f 1 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b88f36e201ad481332a892530cc1684f 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hmd 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hmd 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.hmd 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:46.950 03:12:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=694b2d5290eb6428c7c2374fd81d6cfc5144a576479b7382 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.WT8 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 694b2d5290eb6428c7c2374fd81d6cfc5144a576479b7382 2 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 694b2d5290eb6428c7c2374fd81d6cfc5144a576479b7382 2 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=694b2d5290eb6428c7c2374fd81d6cfc5144a576479b7382 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.WT8 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.WT8 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.WT8 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=893aeaaded942b7b86e04113f8495df6 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sYg 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 893aeaaded942b7b86e04113f8495df6 0 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 893aeaaded942b7b86e04113f8495df6 0 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=893aeaaded942b7b86e04113f8495df6 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:46.950 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sYg 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sYg 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.sYg 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3dcdcbd8cf552aba0cd1d382d7d3f096c4b879c243a4d32110bf0b635669194e 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.cbm 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3dcdcbd8cf552aba0cd1d382d7d3f096c4b879c243a4d32110bf0b635669194e 3 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3dcdcbd8cf552aba0cd1d382d7d3f096c4b879c243a4d32110bf0b635669194e 3 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3dcdcbd8cf552aba0cd1d382d7d3f096c4b879c243a4d32110bf0b635669194e 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.cbm 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.cbm 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.cbm 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 84408 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 84408 ']' 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
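For reference, a minimal sketch of what each gen_dhchap_key call above produces. The xxd and mktemp steps are copied from the trace; the wrapping into a DHHC-1 string is an assumption (consistent with the DHHC-1 values printed later in this log) that the secret is the ASCII key followed by its CRC-32, base64-encoded, with the digest index (00=null, 01=sha256, 02=sha384, 03=sha512) in the middle field:

  key=$(xxd -p -c0 -l 16 /dev/urandom)     # 32-hex-char key for a "null"-digest entry, as in the log
  file=$(mktemp -t spdk.key-null.XXX)      # e.g. /tmp/spdk.key-null.nKy
  # format_dhchap_key then emits DHHC-1:<digest>:<base64(key + CRC-32)>: via inline python;
  # the little-endian CRC-32 append is an assumption about that helper, not taken from the log
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()), end="")' "$key" 0 > "$file"
  chmod 0600 "$file"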
00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.208 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nKy 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.AcH ]] 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AcH 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.jdJ 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.X9s ]] 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.X9s 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.o0h 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.hmd ]] 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hmd 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.466 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.WT8 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.sYg ]] 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.sYg 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.cbm 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
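The rpc_cmd calls above are autotest's wrapper around scripts/rpc.py talking to the nvmf_tgt listening on /var/tmp/spdk.sock, so the key registration is roughly equivalent to the following direct invocations (key names and file paths are taken from this log; the rpc.py location is an assumption based on the repo path shown earlier):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc keyring_file_add_key key1  /tmp/spdk.key-null.jdJ      # DH-HMAC-CHAP host key for keyid 1
  $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.X9s    # matching controller key used for bidirectional auth
  # ...and likewise for key0/ckey0 through key4 (key4 has no controller key)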
00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:47.467 03:12:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:48.032 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:48.032 Waiting for block devices as requested 00:23:48.032 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:48.032 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:48.965 No valid GPT data, bailing 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:48.965 No valid GPT data, bailing 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:48.965 No valid GPT data, bailing 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:48.965 No valid GPT data, bailing 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:48.965 03:12:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:48.965 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -a 10.0.0.1 -t tcp -s 4420 00:23:48.965 00:23:48.965 Discovery Log Number of Records 2, Generation counter 2 00:23:48.965 =====Discovery Log Entry 0====== 00:23:48.965 trtype: tcp 00:23:48.966 adrfam: ipv4 00:23:48.966 subtype: current discovery subsystem 00:23:48.966 treq: not specified, sq flow control disable supported 00:23:48.966 portid: 1 00:23:48.966 trsvcid: 4420 00:23:48.966 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:48.966 traddr: 10.0.0.1 00:23:48.966 eflags: none 00:23:48.966 sectype: none 00:23:48.966 =====Discovery Log Entry 1====== 00:23:48.966 trtype: tcp 00:23:48.966 adrfam: ipv4 00:23:48.966 subtype: nvme subsystem 00:23:48.966 treq: not specified, sq flow control disable supported 00:23:48.966 portid: 1 00:23:48.966 trsvcid: 4420 00:23:48.966 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:48.966 traddr: 10.0.0.1 00:23:48.966 eflags: none 00:23:48.966 sectype: none 00:23:49.223 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:49.223 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:49.223 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:49.223 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:49.223 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.223 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.223 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.223 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.224 nvme0n1 00:23:49.224 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.483 nvme0n1 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.483 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.742 03:12:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.742 nvme0n1 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.742 03:12:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.742 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.743 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.001 nvme0n1 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:50.001 03:12:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.001 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.002 nvme0n1 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.002 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.260 nvme0n1 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.260 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.518 03:12:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.776 nvme0n1 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.776 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.034 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.034 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.034 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.035 nvme0n1 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.035 03:12:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.035 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.294 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.294 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.294 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:51.294 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.294 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.295 nvme0n1 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.295 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.555 nvme0n1 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
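A note on the cycle that keeps repeating in this trace: for each (digest, dhgroup, keyid) combination, host/auth.sh first programs the key material into the target side (the traced echo 'hmac(sha256)' / echo ffdhe... / echo DHHC-1:... lines), then drives the initiator through the same short RPC sequence via its rpc_cmd wrapper. The sketch below condenses that host-side sequence from the trace alone; the standalone scripts/rpc.py invocation and the assumption that the key$keyid / ckey$keyid key names were registered earlier in the run are mine, not taken from the script source.

# One DH-HMAC-CHAP probe on the initiator side, condensed from the trace (sha256/ffdhe3072 shown).
digest=sha256 dhgroup=ffdhe3072 keyid=2
scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# --dhchap-ctrlr-key is only passed when a controller key exists for this keyid; that is what the
# traced expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) does, and why the
# keyid=4 attach runs with --dhchap-key key4 alone.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# The probe only counts if the controller comes up under the expected name; then it is torn down.
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved in the trace appear to be the bdev name printed by each successful attach, emitted just before the get_controllers check runs.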
00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.555 03:12:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.814 nvme0n1 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.814 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
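The get_main_ns_ip helper traced from nvmf/common.sh before each attach resolves which address the initiator should dial for the active transport. A minimal reconstruction from this trace follows; only the expanded values (rdma/tcp, NVMF_FIRST_TARGET_IP, NVMF_INITIATOR_IP, 10.0.0.1) come from the log, so the outer variable name used here (TEST_TRANSPORT) and the exact guard layout are assumptions.

# Inferred shape of get_main_ns_ip: map transport -> environment-variable name, then
# dereference that name to get the address the test should connect to.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # The trace shows these guards already expanded: "tcp", "NVMF_INITIATOR_IP", "10.0.0.1".
    [[ -z ${TEST_TRANSPORT:-} ]] && return 1                    # TEST_TRANSPORT is an assumed name
    [[ -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip:-} ]] && return 1                               # indirect expansion to the real address
    echo "${!ip}"
}

In this TCP run it resolves to 10.0.0.1, which is why every bdev_nvme_attach_controller call in the trace dials -a 10.0.0.1 -s 4420.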
00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.380 03:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.638 nvme0n1 00:23:52.638 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.638 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.638 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.638 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.638 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.638 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:23:52.896 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.897 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.156 nvme0n1 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.156 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.415 nvme0n1 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.415 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.416 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.416 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.416 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.416 03:12:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.416 03:12:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:53.416 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.416 03:12:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.675 nvme0n1 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.675 03:13:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.675 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.934 nvme0n1 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.934 03:13:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.466 nvme0n1 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.466 03:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.035 nvme0n1 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:57.035 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.036 
03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.036 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.293 nvme0n1 00:23:57.294 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.294 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.294 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.294 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.294 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.294 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.552 03:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.811 nvme0n1 00:23:57.811 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.811 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.811 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.811 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.811 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.811 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.070 03:13:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.070 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.328 nvme0n1 00:23:58.328 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.328 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.328 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.328 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.328 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.328 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.328 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.328 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.328 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.328 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.587 03:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.154 nvme0n1 00:23:59.154 03:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.154 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.154 03:13:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.154 03:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.154 03:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.154 03:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.154 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.154 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.154 03:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.154 03:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.412 03:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.412 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.412 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 
-- # local ip 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.413 03:13:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.981 nvme0n1 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.981 03:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.240 03:13:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.808 nvme0n1 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.808 
03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.808 03:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.067 03:13:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.635 nvme0n1 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:01.635 
03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.635 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.572 nvme0n1 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.572 03:13:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.572 nvme0n1 00:24:02.572 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.572 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.572 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.572 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.572 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.572 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
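Condensed, the sequence that repeats for every digest/dhgroup/keyid combination in this part of the log is the one below. This is a minimal sketch, assuming the test's sourced helpers (rpc_cmd, nvmet_auth_set_key) and the key0..key4 / ckey0..ckey3 registrations are already in place exactly as in the run above:

digest=sha384 dhgroup=ffdhe2048 keyid=1

# Program the target side for this key index, then restrict the host to a single digest/dhgroup.
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with DH-HMAC-CHAP; the controller key is only passed when a ckey exists for this keyid.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"

# Verify the controller authenticated and came up as nvme0, then detach it before the next combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0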
00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.832 nvme0n1 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.832 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.833 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.092 nvme0n1 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.092 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.351 nvme0n1 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.351 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.352 nvme0n1 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.352 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
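For reference, these are the five host secrets and four controller secrets the keyid loop indexes, copied from the key=/ckey= echoes in this log; keyid 4 has no controller secret, which is why its bdev_nvme_attach_controller calls omit --dhchap-ctrlr-key. How these strings get registered under the names key0..key4 / ckey0..ckey3 is assumed to happen in the sourced test setup:

keys=(
    "DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP:"                                              # keyid 0
    "DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==:"                      # keyid 1
    "DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL:"                                              # keyid 2
    "DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==:"                      # keyid 3
    "DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=:"  # keyid 4
)
ckeys=(
    "DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=:"  # ckey 0
    "DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==:"                      # ckey 1
    "DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5:"                                              # ckey 2
    "DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA:"                                              # ckey 3
    ""                                                                                                          # keyid 4: none
)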
00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.611 nvme0n1 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.611 03:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.611 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.611 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.611 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
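Putting the xtrace markers together (host/auth.sh@100-104), the driver that generates all of the iterations above reconstructs, as a sketch, to the nested loop below; only the digests and dhgroups actually visible in this slice of the log are listed, and the test's real arrays may contain more entries:

digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)

for digest in "${digests[@]}"; do             # auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do       # auth.sh@101
        for keyid in "${!keys[@]}"; do        # auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # target side, auth.sh@103
            connect_authenticate "$digest" "$dhgroup" "$keyid"    # host side, auth.sh@104
        done
    done
done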
00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.612 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.871 nvme0n1 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.871 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.130 nvme0n1 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.130 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.131 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.390 nvme0n1 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.390 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.391 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.391 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.391 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:04.391 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.391 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.391 nvme0n1 00:24:04.391 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.391 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.391 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.391 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.391 03:13:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.650 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.650 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.650 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.650 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.650 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.650 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.650 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.651 03:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.910 nvme0n1 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.910 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.168 nvme0n1 00:24:05.168 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.168 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.168 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.169 03:13:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.169 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.427 nvme0n1 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:05.427 03:13:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.427 03:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.684 nvme0n1 00:24:05.684 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.684 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.684 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.684 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.684 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.684 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.684 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.684 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.684 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.684 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.684 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.941 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:05.941 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:05.941 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.941 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.942 nvme0n1 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.942 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.199 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.199 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.199 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.199 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.199 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.199 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.200 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.457 nvme0n1 00:24:06.457 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.457 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.457 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.457 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.457 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.457 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.716 03:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.716 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.975 nvme0n1 00:24:06.975 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.975 03:13:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.975 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.975 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.975 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.976 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.234 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.234 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.234 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.235 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.494 nvme0n1 00:24:07.494 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.494 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.494 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.494 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.494 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.494 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.494 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.494 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.494 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.494 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.752 03:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.752 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.752 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.752 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.752 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.752 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.752 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.752 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.752 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.752 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.752 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:07.752 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.752 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.009 nvme0n1 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
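Each "nvme0n1" block in the trace above is one pass of the same host-side routine: host/auth.sh restricts the initiator to a single digest/dhgroup pair, attaches to the kernel target at 10.0.0.1:4420 with the key under test, checks that a controller actually came up, and detaches again before the next keyid. A minimal sketch of that pass, reconstructed from the rpc_cmd calls logged here, is shown below; rpc_cmd, get_main_ns_ip and the ckeys array are taken from the trace itself, while the argument handling around them is an assumption rather than the verbatim SPDK script.

    # Sketch of one connect_authenticate pass as seen in this trace.
    # Assumes rpc_cmd, get_main_ns_ip and the ckeys[] array are provided by
    # the surrounding test environment (all three are visible in the log above).
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Controller key is optional; keyid 4 in this run carries no ckey.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Limit the host to the digest/dhgroup combination under test.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach to the target; DH-HMAC-CHAP authentication happens during connect.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # The pass succeeds if the controller is visible, then tear it down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }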
00:24:08.009 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.010 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.010 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:08.010 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.010 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.575 nvme0n1 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
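The nvmf/common.sh fragments that repeat before every attach (@741-@755 in this trace) are the helper that resolves which address to dial: it maps each transport to the environment variable holding its address and then expands that variable indirectly, which is why the log shows ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1. A rough reconstruction follows; ip_candidates, NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP appear verbatim in the trace, while the TEST_TRANSPORT variable name and the early returns are assumptions.

    # Reconstruction of the get_main_ns_ip helper traced above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        # Each transport publishes its address in a different variable.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                  # assumed guard
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1 here
        echo "${!ip}"
    }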
00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.575 03:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.510 nvme0n1 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.510 03:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.078 nvme0n1 00:24:10.078 03:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.078 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.078 03:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.078 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.078 03:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.078 03:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.337 03:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.903 nvme0n1 00:24:10.903 03:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.903 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.903 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.903 03:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.903 03:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.903 03:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.162 03:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.751 nvme0n1 00:24:11.751 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.751 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:11.751 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.751 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.751 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.751 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.751 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.751 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.751 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.751 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.008 03:13:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.008 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.575 nvme0n1 00:24:12.575 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.575 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.575 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.575 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.575 03:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.575 03:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:12.575 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.576 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.836 nvme0n1 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.836 03:13:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.836 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.095 nvme0n1 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.095 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.096 nvme0n1 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.096 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.354 03:13:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.354 03:13:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.354 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.354 nvme0n1 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.355 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.614 nvme0n1 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:13.614 03:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.614 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.873 nvme0n1 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.873 
03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.873 03:13:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.873 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.133 nvme0n1 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
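The trace above repeats the same pattern once per digest/DH-group/key combination: the target-side key is installed through the test's nvmet_auth_set_key helper, the host is restricted to that digest and DH group, the controller is attached with the matching DH-HMAC-CHAP key, and the controller name is checked before detaching. A minimal sketch of one such connect_authenticate iteration, reconstructed from the trace; it assumes rpc_cmd is the harness wrapper around SPDK's rpc.py and that the key names (keyN/ckeyN) were registered by the test beforehand, as the earlier part of this run does:

# One iteration, as traced (digest/dhgroup/keyid vary per loop pass).
digest=sha512 dhgroup=ffdhe3072 keyid=2

# Restrict the host to a single digest/DH group so the handshake must use them.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect to the target, authenticating with the DH-HMAC-CHAP key
# (and the controller key, when one is defined for this keyid).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# The attach only succeeds if authentication did; verify, then clean up.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0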
00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.133 nvme0n1 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.133 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.392 03:13:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
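Before each attach, the get_main_ns_ip helper from nvmf/common.sh resolves the address to connect to; its expansion is what produces the repeated ip_candidates lines and the final "echo 10.0.0.1" above. A sketch of that logic as the trace shows it; the transport variable name (the trace only shows its expanded value, tcp) and the use of bash indirection for the final lookup are assumptions, and the empty-value guards are kept as plain sanity checks:

get_main_ns_ip() {
    local ip
    # Map transport type to the *name* of the environment variable holding the IP.
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs use the first target IP
        ["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) use the initiator IP
    )
    # Assumed variable name for the active transport; the trace shows only "tcp".
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -n ${!ip} ]] || return 1
    echo "${!ip}"    # expands to 10.0.0.1 in this run
}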
00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.392 nvme0n1 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.392 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.651 
03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.651 03:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.652 nvme0n1 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.652 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.910 nvme0n1 00:24:14.910 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.910 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.910 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.910 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.910 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.910 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:15.168 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.168 03:13:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.169 nvme0n1 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.169 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
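The nvmet_auth_set_key sha512 ffdhe4096 2 call being expanded at this point is the target-side half of each iteration: it selects the digest, DH group and secret for key slot 2 and applies them to the kernel nvmet target's entry for host nqn.2024-02.io.spdk:host0. The xtrace output records only the echo commands, not their redirect targets; assuming those targets are the nvmet configfs attributes dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key (an assumption, not visible in this trace), the equivalent manual setup would look roughly like:

  # sketch only: the configfs host directory and attribute names are assumed, not shown in the trace
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host_dir/dhchap_hash"
  echo 'ffdhe4096'    > "$host_dir/dhchap_dhgroup"
  echo 'DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL:' > "$host_dir/dhchap_key"       # keys[2] as seen in the trace
  echo 'DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5:' > "$host_dir/dhchap_ctrl_key"  # ckeys[2], enabling bidirectional auth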
00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.427 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.686 nvme0n1 00:24:15.686 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.686 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:24:15.686 03:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.686 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.686 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.686 03:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.686 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.686 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.686 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.686 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.686 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.687 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.947 nvme0n1 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.947 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.948 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.948 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:15.948 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.948 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.207 nvme0n1 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
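get_main_ns_ip, whose trace straddles these records, simply maps the transport name to an environment variable and dereferences it; for tcp it picks NVMF_INITIATOR_IP, which resolves to 10.0.0.1 in this run. A standalone equivalent is sketched here (the variable names come from the trace; the 10.0.0.1 value is supplied by the test environment, not computed):

  # sketch of the address lookup performed by get_main_ns_ip, using bash indirect expansion
  NVMF_INITIATOR_IP=10.0.0.1                    # value observed in the trace
  declare -A ip_candidates=( ["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP )
  ip=${ip_candidates["tcp"]}                    # -> NVMF_INITIATOR_IP
  echo "${!ip}"                                 # -> 10.0.0.1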
00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.207 03:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.773 nvme0n1 00:24:16.773 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.773 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.773 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.773 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
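connect_authenticate sha512 ffdhe6144 1, expanded in the records that follow, is the host-side half of the iteration: it restricts the SPDK initiator to the digest and DH group under test, attaches a controller over TCP using the key-slot-1 DH-HMAC-CHAP material, checks that it shows up as nvme0, and detaches it again. Outside the test's rpc_cmd wrapper the same sequence could be driven with scripts/rpc.py; this is a sketch only, assuming that path and that key1/ckey1 were registered with SPDK earlier in the run (registration is not part of this excerpt):

  # sketch of the RPC sequence traced below; the flags are the ones visible in this log
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0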
00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.774 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.032 nvme0n1 00:24:17.032 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.032 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.032 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.032 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.032 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.032 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:17.290 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.291 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.549 nvme0n1 00:24:17.549 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.549 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.549 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.549 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.549 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.549 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.549 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.549 03:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.549 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.549 03:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.549 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.116 nvme0n1 00:24:18.116 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.116 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.117 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.377 nvme0n1 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.377 03:13:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3NjQwYTQ5YTE2MWMyZDliNzk2ZDE2MDJmYTliNGVpJtHP: 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: ]] 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTllYzEwZDQzZjA5MWY2NzA0YjNiNGY4MTNmZWRlNjQ5ZjNiYmFjODVmZWI5MTY4YTJjZDc0ZWE4YmZiYTc0ObLc/YE=: 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.377 03:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.945 nvme0n1 00:24:18.945 03:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.945 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.945 03:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.945 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.945 03:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.945 03:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.203 03:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.770 nvme0n1 00:24:19.770 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.770 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.770 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.770 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.770 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.770 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.770 03:13:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.770 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.770 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.770 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.770 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.770 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.770 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWJiMDRmMGU3NTIxNGU2MjM3MGEwNmFhODVlZTQ4MGaYeelL: 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: ]] 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjg4ZjM2ZTIwMWFkNDgxMzMyYTg5MjUzMGNjMTY4NGY+sVP5: 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.771 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.339 nvme0n1 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk0YjJkNTI5MGViNjQyOGM3YzIzNzRmZDgxZDZjZmM1MTQ0YTU3NjQ3OWI3MzgyQkWY7A==: 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: ]] 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkzYWVhYWRlZDk0MmI3Yjg2ZTA0MTEzZjg0OTVkZjaLiHWA: 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:20.339 03:13:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.339 03:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.907 nvme0n1 00:24:20.907 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.907 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.907 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.907 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.907 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.907 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.907 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.907 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.907 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2RjZGNiZDhjZjU1MmFiYTBjZDFkMzgyZDdkM2YwOTZjNGI4NzljMjQzYTRkMzIxMTBiZjBiNjM1NjY5MTk0ZeBgXrk=: 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:20.908 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.477 nvme0n1 00:24:21.477 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.477 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.477 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.477 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.477 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.477 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTViZGQ3MTI5NzlhOGFjNTVjN2MwOTI3N2NmMWM3MjE5YTdiOTE1YWIyMGU1ZmQx8Bqojw==: 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: ]] 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTJjNmU1MTUyY2MyZmQ2YTczZGUxZjM2OTg4ODNmNzgxYzY3YTM4ZDNiODg0OWU3a2gupA==: 00:24:21.736 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:21.737 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.737 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.737 03:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.737 03:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:21.737 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.737 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.737 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.737 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.737 
03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.737 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.737 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.737 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.737 03:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.737 request: 00:24:21.737 { 00:24:21.737 "name": "nvme0", 00:24:21.737 "trtype": "tcp", 00:24:21.737 "traddr": "10.0.0.1", 00:24:21.737 "adrfam": "ipv4", 00:24:21.737 "trsvcid": "4420", 00:24:21.737 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:21.737 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:21.737 "prchk_reftag": false, 00:24:21.737 "prchk_guard": false, 00:24:21.737 "hdgst": false, 00:24:21.737 "ddgst": false, 00:24:21.737 "method": "bdev_nvme_attach_controller", 00:24:21.737 "req_id": 1 00:24:21.737 } 00:24:21.737 Got JSON-RPC error response 00:24:21.737 response: 00:24:21.737 { 00:24:21.737 "code": -5, 00:24:21.737 "message": "Input/output error" 00:24:21.737 } 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.737 request: 00:24:21.737 { 00:24:21.737 "name": "nvme0", 00:24:21.737 "trtype": "tcp", 00:24:21.737 "traddr": "10.0.0.1", 00:24:21.737 "adrfam": "ipv4", 00:24:21.737 "trsvcid": "4420", 00:24:21.737 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:21.737 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:21.737 "prchk_reftag": false, 00:24:21.737 "prchk_guard": false, 00:24:21.737 "hdgst": false, 00:24:21.737 "ddgst": false, 00:24:21.737 "dhchap_key": "key2", 00:24:21.737 "method": "bdev_nvme_attach_controller", 00:24:21.737 "req_id": 1 00:24:21.737 } 00:24:21.737 Got JSON-RPC error response 00:24:21.737 response: 00:24:21.737 { 00:24:21.737 "code": -5, 00:24:21.737 "message": "Input/output error" 00:24:21.737 } 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:21.737 03:13:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.737 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.997 request: 00:24:21.997 { 00:24:21.997 "name": "nvme0", 00:24:21.997 "trtype": "tcp", 00:24:21.997 "traddr": "10.0.0.1", 00:24:21.997 "adrfam": "ipv4", 
00:24:21.997 "trsvcid": "4420", 00:24:21.997 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:21.997 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:21.997 "prchk_reftag": false, 00:24:21.997 "prchk_guard": false, 00:24:21.997 "hdgst": false, 00:24:21.997 "ddgst": false, 00:24:21.997 "dhchap_key": "key1", 00:24:21.997 "dhchap_ctrlr_key": "ckey2", 00:24:21.997 "method": "bdev_nvme_attach_controller", 00:24:21.997 "req_id": 1 00:24:21.997 } 00:24:21.997 Got JSON-RPC error response 00:24:21.997 response: 00:24:21.997 { 00:24:21.997 "code": -5, 00:24:21.997 "message": "Input/output error" 00:24:21.997 } 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:21.997 rmmod nvme_tcp 00:24:21.997 rmmod nvme_fabrics 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 84408 ']' 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 84408 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 84408 ']' 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 84408 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84408 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:21.997 killing process with pid 84408 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84408' 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 84408 00:24:21.997 03:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 84408 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:22.934 
03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:22.934 03:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:23.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:23.761 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:23.761 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:23.761 03:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.nKy /tmp/spdk.key-null.jdJ /tmp/spdk.key-sha256.o0h /tmp/spdk.key-sha384.WT8 /tmp/spdk.key-sha512.cbm /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:24:23.761 03:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:24.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:24.020 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:24.020 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:24.020 00:24:24.020 real 0m39.246s 00:24:24.020 user 0m34.511s 00:24:24.020 sys 0m4.233s 00:24:24.020 03:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:24.020 03:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.020 
************************************ 00:24:24.020 END TEST nvmf_auth_host 00:24:24.020 ************************************ 00:24:24.279 03:13:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:24.279 03:13:30 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:24:24.279 03:13:30 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:24.279 03:13:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:24.279 03:13:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:24.279 03:13:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:24.279 ************************************ 00:24:24.279 START TEST nvmf_digest 00:24:24.279 ************************************ 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:24.279 * Looking for test storage... 00:24:24.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.279 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:24.280 Cannot find device "nvmf_tgt_br" 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:24.280 Cannot find device "nvmf_tgt_br2" 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:24.280 Cannot find device "nvmf_tgt_br" 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:24:24.280 03:13:30 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:24.280 Cannot find device "nvmf_tgt_br2" 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:24.280 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:24.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:24.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:24.539 03:13:30 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:24.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:24:24.539 00:24:24.539 --- 10.0.0.2 ping statistics --- 00:24:24.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.539 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:24.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:24.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:24:24.539 00:24:24.539 --- 10.0.0.3 ping statistics --- 00:24:24.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.539 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:24.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:24.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:24:24.539 00:24:24.539 --- 10.0.0.1 ping statistics --- 00:24:24.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.539 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:24.539 03:13:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:24.539 03:13:31 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:24.539 03:13:31 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:24.539 03:13:31 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:24.539 03:13:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:24.539 03:13:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:24.539 03:13:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:24.539 ************************************ 00:24:24.539 START TEST nvmf_digest_clean 00:24:24.539 ************************************ 00:24:24.539 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:24.539 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:24.539 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:24.539 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:24.539 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:24.540 03:13:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:24.540 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.540 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:24.540 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:24.540 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=86007 00:24:24.540 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 86007 00:24:24.540 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86007 ']' 00:24:24.540 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.540 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.540 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:24.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.540 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.540 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.540 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:24.798 [2024-07-13 03:13:31.141601] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:24.798 [2024-07-13 03:13:31.141790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.068 [2024-07-13 03:13:31.320582] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.068 [2024-07-13 03:13:31.546560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.068 [2024-07-13 03:13:31.546623] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.068 [2024-07-13 03:13:31.546641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.068 [2024-07-13 03:13:31.546655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.068 [2024-07-13 03:13:31.546666] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
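The target bring-up traced above follows the usual autotest pattern: nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc, so subsystem initialization is deferred until the test drives a framework_start_init over the RPC socket, and waitforlisten simply blocks until /var/tmp/spdk.sock answers. A minimal sketch of that pattern, using the paths shown in this log; wait_for_rpc_sock is a stand-in for the real waitforlisten helper in autotest_common.sh, not its actual implementation:

  # Launch the target in the test namespace; -i sets the shared-memory id, -e the tracepoint mask.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Poll the RPC socket until the app answers (or the process dies).
  wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    while kill -0 "$pid" 2>/dev/null; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods \
          >/dev/null 2>&1 && return 0
      sleep 0.5
    done
    return 1
  }
  wait_for_rpc_sock "$nvmfpid"
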
00:24:25.068 [2024-07-13 03:13:31.546706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.646 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.646 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:25.646 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:25.646 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:25.646 03:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:25.646 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.646 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:25.646 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:25.646 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:25.646 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.646 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:25.905 [2024-07-13 03:13:32.239114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:25.905 null0 00:24:25.905 [2024-07-13 03:13:32.357740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.905 [2024-07-13 03:13:32.381856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86039 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86039 /var/tmp/bperf.sock 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86039 ']' 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
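The rpc_cmd batch at host/digest.sh@43 is what produces the null0 bdev, the TCP transport and the 10.0.0.2:4420 listener reported in the notices above; run_bperf then starts a separate bdevperf instance on core mask 0x2 (core 1, away from the target on core 0) with its own RPC socket at /var/tmp/bperf.sock. The exact RPC sequence is not echoed in the trace; the lines below are a hedged reconstruction, reusing the nqn and serial defined earlier in this log, with the null-bdev size and block size picked arbitrarily:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc framework_start_init
  $rpc bdev_null_create null0 1000 512                 # 1000 MiB / 512 B are assumed values
  $rpc nvmf_create_transport -t tcp -o                 # matches NVMF_TRANSPORT_OPTS='-t tcp -o' above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
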
00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.905 03:13:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:26.163 [2024-07-13 03:13:32.503607] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:26.163 [2024-07-13 03:13:32.503852] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86039 ] 00:24:26.421 [2024-07-13 03:13:32.681009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.421 [2024-07-13 03:13:32.907128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.985 03:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:26.985 03:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:26.985 03:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:26.985 03:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:26.986 03:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:27.552 [2024-07-13 03:13:33.864568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:27.552 03:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:27.552 03:13:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:28.116 nvme0n1 00:24:28.116 03:13:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:28.116 03:13:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:28.116 Running I/O for 2 seconds... 
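With --ddgst on the attach above, the initiator enables the NVMe/TCP data digest, so every read completion forces a CRC32C computation in the accel framework; after the two-second run that follows, the script reads the accel statistics back from the bperf socket and checks that the crc32c operations ran in the expected module (plain software here, since DSA is disabled). A sketch of that check, using the same jq filter as host/digest.sh and assuming the same socket path; the real script derives the expected module from its dsa flags rather than hard-coding it:

  # Pull accel stats from the bdevperf instance and verify crc32c was exercised in software.
  stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
  read -r acc_module acc_executed < <(jq -rc '.operations[]
      | select(.opcode=="crc32c")
      | "\(.module_name) \(.executed)"' <<< "$stats")
  [[ $acc_module == software && $acc_executed -gt 0 ]] || echo "crc32c digest check failed" >&2
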
00:24:30.015 00:24:30.015 Latency(us) 00:24:30.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.015 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:30.015 nvme0n1 : 2.00 13438.10 52.49 0.00 0.00 9517.42 8340.95 22997.18 00:24:30.015 =================================================================================================================== 00:24:30.015 Total : 13438.10 52.49 0.00 0.00 9517.42 8340.95 22997.18 00:24:30.015 0 00:24:30.015 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:30.015 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:30.015 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:30.015 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:30.015 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:30.015 | select(.opcode=="crc32c") 00:24:30.015 | "\(.module_name) \(.executed)"' 00:24:30.274 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:30.274 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:30.274 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:30.274 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:30.274 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86039 00:24:30.274 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 86039 ']' 00:24:30.274 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86039 00:24:30.274 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:30.274 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:30.274 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86039 00:24:30.531 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:30.531 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:30.531 killing process with pid 86039 00:24:30.531 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86039' 00:24:30.531 Received shutdown signal, test time was about 2.000000 seconds 00:24:30.531 00:24:30.531 Latency(us) 00:24:30.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.531 =================================================================================================================== 00:24:30.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.531 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86039 00:24:30.531 03:13:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86039 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86107 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86107 /var/tmp/bperf.sock 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86107 ']' 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.464 03:13:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:31.464 [2024-07-13 03:13:37.784930] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:31.464 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:31.464 Zero copy mechanism will not be used. 
00:24:31.464 [2024-07-13 03:13:37.785126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86107 ] 00:24:31.464 [2024-07-13 03:13:37.946468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.721 [2024-07-13 03:13:38.122761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.286 03:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:32.286 03:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:32.286 03:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:32.286 03:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:32.286 03:13:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:32.852 [2024-07-13 03:13:39.044277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:32.852 03:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:32.852 03:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:33.110 nvme0n1 00:24:33.110 03:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:33.110 03:13:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:33.110 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:33.110 Zero copy mechanism will not be used. 00:24:33.110 Running I/O for 2 seconds... 
00:24:35.639 00:24:35.639 Latency(us) 00:24:35.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.639 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:35.639 nvme0n1 : 2.00 6357.81 794.73 0.00 0.00 2512.72 2144.81 9234.62 00:24:35.639 =================================================================================================================== 00:24:35.639 Total : 6357.81 794.73 0.00 0.00 2512.72 2144.81 9234.62 00:24:35.639 0 00:24:35.639 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:35.639 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:35.639 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:35.639 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:35.639 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:35.639 | select(.opcode=="crc32c") 00:24:35.639 | "\(.module_name) \(.executed)"' 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86107 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 86107 ']' 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86107 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86107 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:35.640 killing process with pid 86107 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86107' 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86107 00:24:35.640 Received shutdown signal, test time was about 2.000000 seconds 00:24:35.640 00:24:35.640 Latency(us) 00:24:35.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.640 =================================================================================================================== 00:24:35.640 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:35.640 03:13:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86107 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86179 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86179 /var/tmp/bperf.sock 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86179 ']' 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.577 03:13:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:36.577 [2024-07-13 03:13:43.042400] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:36.577 [2024-07-13 03:13:43.042595] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86179 ] 00:24:36.836 [2024-07-13 03:13:43.210594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.095 [2024-07-13 03:13:43.380453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.663 03:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.663 03:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:37.663 03:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:37.663 03:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:37.663 03:13:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:37.922 [2024-07-13 03:13:44.301781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:37.922 03:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:37.922 03:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:38.488 nvme0n1 00:24:38.488 03:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:38.488 03:13:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:38.488 Running I/O for 2 seconds... 
00:24:40.393 00:24:40.393 Latency(us) 00:24:40.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.393 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:40.393 nvme0n1 : 2.00 12546.85 49.01 0.00 0.00 10191.49 6225.92 23354.65 00:24:40.393 =================================================================================================================== 00:24:40.393 Total : 12546.85 49.01 0.00 0.00 10191.49 6225.92 23354.65 00:24:40.393 0 00:24:40.393 03:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:40.393 03:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:40.393 03:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:40.393 03:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:40.393 | select(.opcode=="crc32c") 00:24:40.393 | "\(.module_name) \(.executed)"' 00:24:40.393 03:13:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:40.652 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:40.652 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:40.652 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:40.652 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:40.652 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86179 00:24:40.652 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 86179 ']' 00:24:40.652 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86179 00:24:40.652 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:40.652 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:40.652 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86179 00:24:40.911 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:40.911 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:40.911 killing process with pid 86179 00:24:40.911 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86179' 00:24:40.911 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86179 00:24:40.911 Received shutdown signal, test time was about 2.000000 seconds 00:24:40.911 00:24:40.911 Latency(us) 00:24:40.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.911 =================================================================================================================== 00:24:40.911 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.911 03:13:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86179 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86250 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86250 /var/tmp/bperf.sock 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86250 ']' 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:41.873 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:41.874 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:41.874 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.874 03:13:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:41.874 [2024-07-13 03:13:48.341839] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:41.874 [2024-07-13 03:13:48.342024] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86250 ] 00:24:41.874 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:41.874 Zero copy mechanism will not be used. 
00:24:42.132 [2024-07-13 03:13:48.517561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.391 [2024-07-13 03:13:48.713798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.957 03:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.957 03:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:42.957 03:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:42.957 03:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:42.957 03:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:43.525 [2024-07-13 03:13:49.749569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:43.525 03:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:43.525 03:13:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:43.784 nvme0n1 00:24:43.784 03:13:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:43.784 03:13:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:44.043 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:44.043 Zero copy mechanism will not be used. 00:24:44.043 Running I/O for 2 seconds... 
00:24:45.950 00:24:45.950 Latency(us) 00:24:45.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.950 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:45.950 nvme0n1 : 2.00 4498.46 562.31 0.00 0.00 3546.43 3098.07 11021.96 00:24:45.950 =================================================================================================================== 00:24:45.950 Total : 4498.46 562.31 0.00 0.00 3546.43 3098.07 11021.96 00:24:45.950 0 00:24:45.950 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:45.950 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:45.950 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:45.950 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:45.950 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:45.950 | select(.opcode=="crc32c") 00:24:45.950 | "\(.module_name) \(.executed)"' 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86250 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 86250 ']' 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86250 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86250 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:46.209 killing process with pid 86250 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86250' 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86250 00:24:46.209 Received shutdown signal, test time was about 2.000000 seconds 00:24:46.209 00:24:46.209 Latency(us) 00:24:46.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.209 =================================================================================================================== 00:24:46.209 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.209 03:13:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86250 00:24:47.584 03:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 86007 00:24:47.584 03:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 86007 ']' 00:24:47.584 03:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86007 00:24:47.584 03:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:47.584 03:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.584 03:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86007 00:24:47.584 killing process with pid 86007 00:24:47.584 03:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:47.584 03:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:47.584 03:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86007' 00:24:47.584 03:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86007 00:24:47.584 03:13:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86007 00:24:48.962 ************************************ 00:24:48.962 END TEST nvmf_digest_clean 00:24:48.962 ************************************ 00:24:48.962 00:24:48.962 real 0m24.117s 00:24:48.962 user 0m45.949s 00:24:48.962 sys 0m4.728s 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:48.962 ************************************ 00:24:48.962 START TEST nvmf_digest_error 00:24:48.962 ************************************ 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=86359 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 86359 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86359 ']' 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.962 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.962 03:13:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:48.962 [2024-07-13 03:13:55.319025] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:48.962 [2024-07-13 03:13:55.320249] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.222 [2024-07-13 03:13:55.507849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.481 [2024-07-13 03:13:55.742589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.481 [2024-07-13 03:13:55.742654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.481 [2024-07-13 03:13:55.742688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.481 [2024-07-13 03:13:55.742701] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.481 [2024-07-13 03:13:55.742713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:49.481 [2024-07-13 03:13:55.742765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.740 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:49.740 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:49.740 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:49.740 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:49.740 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:49.999 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.999 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:49.999 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.999 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:49.999 [2024-07-13 03:13:56.267867] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:49.999 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.999 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:49.999 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:49.999 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.999 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@10 -- # set +x 00:24:49.999 [2024-07-13 03:13:56.472258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:50.258 null0 00:24:50.258 [2024-07-13 03:13:56.588227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.258 [2024-07-13 03:13:56.612449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86391 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86391 /var/tmp/bperf.sock 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86391 ']' 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:50.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:50.258 03:13:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:50.258 [2024-07-13 03:13:56.714491] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:50.258 [2024-07-13 03:13:56.714900] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86391 ] 00:24:50.517 [2024-07-13 03:13:56.883413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.776 [2024-07-13 03:13:57.110703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.035 [2024-07-13 03:13:57.302961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:51.294 03:13:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.294 03:13:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:51.294 03:13:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:51.294 03:13:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:51.554 03:13:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:51.554 03:13:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.554 03:13:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:51.554 03:13:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.554 03:13:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.554 03:13:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.813 nvme0n1 00:24:51.813 03:13:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:51.813 03:13:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.813 03:13:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:51.813 03:13:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.813 03:13:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:51.813 03:13:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:51.813 Running I/O for 2 seconds... 
00:24:51.813 [2024-07-13 03:13:58.304030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:51.813 [2024-07-13 03:13:58.304129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.813 [2024-07-13 03:13:58.304160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.073 [2024-07-13 03:13:58.326484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.073 [2024-07-13 03:13:58.326543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.073 [2024-07-13 03:13:58.326581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.073 [2024-07-13 03:13:58.348930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.073 [2024-07-13 03:13:58.349054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.073 [2024-07-13 03:13:58.349082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.073 [2024-07-13 03:13:58.371164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.073 [2024-07-13 03:13:58.371229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.073 [2024-07-13 03:13:58.371251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.073 [2024-07-13 03:13:58.393252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.073 [2024-07-13 03:13:58.393306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.073 [2024-07-13 03:13:58.393361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.073 [2024-07-13 03:13:58.415585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.073 [2024-07-13 03:13:58.415672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.073 [2024-07-13 03:13:58.415708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.073 [2024-07-13 03:13:58.438372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.073 [2024-07-13 03:13:58.438423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.073 [2024-07-13 03:13:58.438447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.073 [2024-07-13 03:13:58.461802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.073 [2024-07-13 03:13:58.461866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.073 [2024-07-13 03:13:58.461907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.073 [2024-07-13 03:13:58.484438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.073 [2024-07-13 03:13:58.484510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.073 [2024-07-13 03:13:58.484536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.073 [2024-07-13 03:13:58.507575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.073 [2024-07-13 03:13:58.507641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.073 [2024-07-13 03:13:58.507664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.074 [2024-07-13 03:13:58.531009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.074 [2024-07-13 03:13:58.531061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.074 [2024-07-13 03:13:58.531089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.074 [2024-07-13 03:13:58.554167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.074 [2024-07-13 03:13:58.554228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.074 [2024-07-13 03:13:58.554251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.332 [2024-07-13 03:13:58.577376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.332 [2024-07-13 03:13:58.577461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.332 [2024-07-13 03:13:58.577485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.332 [2024-07-13 03:13:58.600391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.332 [2024-07-13 03:13:58.600454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.332 
[2024-07-13 03:13:58.600477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.332 [2024-07-13 03:13:58.623684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.332 [2024-07-13 03:13:58.623754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.332 [2024-07-13 03:13:58.623780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.332 [2024-07-13 03:13:58.646440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.333 [2024-07-13 03:13:58.646503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.333 [2024-07-13 03:13:58.646526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.333 [2024-07-13 03:13:58.669046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.333 [2024-07-13 03:13:58.669102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.333 [2024-07-13 03:13:58.669127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.333 [2024-07-13 03:13:58.691799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.333 [2024-07-13 03:13:58.691861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.333 [2024-07-13 03:13:58.691902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.333 [2024-07-13 03:13:58.714352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.333 [2024-07-13 03:13:58.714407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.333 [2024-07-13 03:13:58.714433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.333 [2024-07-13 03:13:58.736499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.333 [2024-07-13 03:13:58.736562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.333 [2024-07-13 03:13:58.736585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.333 [2024-07-13 03:13:58.759453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.333 [2024-07-13 03:13:58.759503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:23113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.333 [2024-07-13 03:13:58.759526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.333 [2024-07-13 03:13:58.782103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.333 [2024-07-13 03:13:58.782161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.333 [2024-07-13 03:13:58.782183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.333 [2024-07-13 03:13:58.804833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.333 [2024-07-13 03:13:58.804899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.333 [2024-07-13 03:13:58.804941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.591 [2024-07-13 03:13:58.827378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.591 [2024-07-13 03:13:58.827439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.591 [2024-07-13 03:13:58.827461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.592 [2024-07-13 03:13:58.850540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.592 [2024-07-13 03:13:58.850608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.592 [2024-07-13 03:13:58.850632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.592 [2024-07-13 03:13:58.873345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.592 [2024-07-13 03:13:58.873406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.592 [2024-07-13 03:13:58.873429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.592 [2024-07-13 03:13:58.895835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.592 [2024-07-13 03:13:58.895932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.592 [2024-07-13 03:13:58.895962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.592 [2024-07-13 03:13:58.918527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.592 
[2024-07-13 03:13:58.918585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.592 [2024-07-13 03:13:58.918606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.592 [2024-07-13 03:13:58.941274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.592 [2024-07-13 03:13:58.941357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.592 [2024-07-13 03:13:58.941382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.592 [2024-07-13 03:13:58.964341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.592 [2024-07-13 03:13:58.964398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.592 [2024-07-13 03:13:58.964419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.592 [2024-07-13 03:13:58.986924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.592 [2024-07-13 03:13:58.986987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.592 [2024-07-13 03:13:58.987062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.592 [2024-07-13 03:13:59.009845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.592 [2024-07-13 03:13:59.009963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.592 [2024-07-13 03:13:59.009985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.592 [2024-07-13 03:13:59.032435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.592 [2024-07-13 03:13:59.032496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.592 [2024-07-13 03:13:59.032537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.592 [2024-07-13 03:13:59.055712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.592 [2024-07-13 03:13:59.055784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.592 [2024-07-13 03:13:59.055806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.592 [2024-07-13 03:13:59.078206] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.592 [2024-07-13 03:13:59.078258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.592 [2024-07-13 03:13:59.078313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.850 [2024-07-13 03:13:59.101272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.850 [2024-07-13 03:13:59.101334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.850 [2024-07-13 03:13:59.101357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.850 [2024-07-13 03:13:59.123673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.850 [2024-07-13 03:13:59.123740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.850 [2024-07-13 03:13:59.123764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.850 [2024-07-13 03:13:59.146990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.850 [2024-07-13 03:13:59.147064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.850 [2024-07-13 03:13:59.147087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.850 [2024-07-13 03:13:59.169763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.850 [2024-07-13 03:13:59.169819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.850 [2024-07-13 03:13:59.169861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.850 [2024-07-13 03:13:59.192131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.850 [2024-07-13 03:13:59.192225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.850 [2024-07-13 03:13:59.192247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.850 [2024-07-13 03:13:59.214338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.850 [2024-07-13 03:13:59.214434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.850 [2024-07-13 03:13:59.214457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.850 [2024-07-13 03:13:59.238538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.850 [2024-07-13 03:13:59.238608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.850 [2024-07-13 03:13:59.238633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.850 [2024-07-13 03:13:59.260955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.850 [2024-07-13 03:13:59.261055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.850 [2024-07-13 03:13:59.261078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.850 [2024-07-13 03:13:59.283491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.850 [2024-07-13 03:13:59.283575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.850 [2024-07-13 03:13:59.283601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.850 [2024-07-13 03:13:59.306734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.850 [2024-07-13 03:13:59.306824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.850 [2024-07-13 03:13:59.306846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:52.850 [2024-07-13 03:13:59.329897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:52.850 [2024-07-13 03:13:59.329991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.850 [2024-07-13 03:13:59.330053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.108 [2024-07-13 03:13:59.352667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.108 [2024-07-13 03:13:59.352724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.108 [2024-07-13 03:13:59.352746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.108 [2024-07-13 03:13:59.374903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.108 [2024-07-13 03:13:59.374978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.108 [2024-07-13 03:13:59.375018] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.108 [2024-07-13 03:13:59.397213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.108 [2024-07-13 03:13:59.397271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.108 [2024-07-13 03:13:59.397293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.108 [2024-07-13 03:13:59.419484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.108 [2024-07-13 03:13:59.419550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.108 [2024-07-13 03:13:59.419574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.108 [2024-07-13 03:13:59.442000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.108 [2024-07-13 03:13:59.442073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.108 [2024-07-13 03:13:59.442094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.108 [2024-07-13 03:13:59.464331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.108 [2024-07-13 03:13:59.464414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.108 [2024-07-13 03:13:59.464439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.108 [2024-07-13 03:13:59.487056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.108 [2024-07-13 03:13:59.487111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.108 [2024-07-13 03:13:59.487132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.108 [2024-07-13 03:13:59.509552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.108 [2024-07-13 03:13:59.509617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.108 [2024-07-13 03:13:59.509642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.108 [2024-07-13 03:13:59.531790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.108 [2024-07-13 03:13:59.531861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 
lba:15970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.108 [2024-07-13 03:13:59.531899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.108 [2024-07-13 03:13:59.554234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.108 [2024-07-13 03:13:59.554334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.108 [2024-07-13 03:13:59.554359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.108 [2024-07-13 03:13:59.576774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.108 [2024-07-13 03:13:59.576848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.108 [2024-07-13 03:13:59.576870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.108 [2024-07-13 03:13:59.599571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.108 [2024-07-13 03:13:59.599651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.108 [2024-07-13 03:13:59.599675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.367 [2024-07-13 03:13:59.622759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.367 [2024-07-13 03:13:59.622819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.367 [2024-07-13 03:13:59.622840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.367 [2024-07-13 03:13:59.645435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.367 [2024-07-13 03:13:59.645489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.367 [2024-07-13 03:13:59.645513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.367 [2024-07-13 03:13:59.668752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.367 [2024-07-13 03:13:59.668811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.367 [2024-07-13 03:13:59.668832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.367 [2024-07-13 03:13:59.691108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.367 [2024-07-13 
03:13:59.691163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.367 [2024-07-13 03:13:59.691189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.367 [2024-07-13 03:13:59.713433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.367 [2024-07-13 03:13:59.713504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.367 [2024-07-13 03:13:59.713557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.367 [2024-07-13 03:13:59.746158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.367 [2024-07-13 03:13:59.746218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.367 [2024-07-13 03:13:59.746241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.367 [2024-07-13 03:13:59.768746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.367 [2024-07-13 03:13:59.768820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.367 [2024-07-13 03:13:59.768848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.367 [2024-07-13 03:13:59.791963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.367 [2024-07-13 03:13:59.792071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.367 [2024-07-13 03:13:59.792095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.367 [2024-07-13 03:13:59.815848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.367 [2024-07-13 03:13:59.815916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.367 [2024-07-13 03:13:59.815943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.367 [2024-07-13 03:13:59.839033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.367 [2024-07-13 03:13:59.839094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.367 [2024-07-13 03:13:59.839117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.627 [2024-07-13 03:13:59.862206] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.627 [2024-07-13 03:13:59.862276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.627 [2024-07-13 03:13:59.862302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.627 [2024-07-13 03:13:59.885132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.627 [2024-07-13 03:13:59.885194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.627 [2024-07-13 03:13:59.885216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.627 [2024-07-13 03:13:59.908160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.627 [2024-07-13 03:13:59.908228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.627 [2024-07-13 03:13:59.908254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.627 [2024-07-13 03:13:59.931324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.627 [2024-07-13 03:13:59.931385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.627 [2024-07-13 03:13:59.931407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.627 [2024-07-13 03:13:59.954164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.627 [2024-07-13 03:13:59.954215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.627 [2024-07-13 03:13:59.954239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.627 [2024-07-13 03:13:59.977152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.627 [2024-07-13 03:13:59.977211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.627 [2024-07-13 03:13:59.977234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.627 [2024-07-13 03:14:00.000365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.627 [2024-07-13 03:14:00.000424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.627 [2024-07-13 03:14:00.000453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.627 [2024-07-13 03:14:00.022462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.627 [2024-07-13 03:14:00.022534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.627 [2024-07-13 03:14:00.022558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.627 [2024-07-13 03:14:00.046721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.627 [2024-07-13 03:14:00.046794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.627 [2024-07-13 03:14:00.046820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.627 [2024-07-13 03:14:00.069612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.627 [2024-07-13 03:14:00.069702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.627 [2024-07-13 03:14:00.069724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.627 [2024-07-13 03:14:00.092407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.627 [2024-07-13 03:14:00.092459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.627 [2024-07-13 03:14:00.092499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.627 [2024-07-13 03:14:00.115491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.627 [2024-07-13 03:14:00.115569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.627 [2024-07-13 03:14:00.115592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.886 [2024-07-13 03:14:00.138309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.886 [2024-07-13 03:14:00.138374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.886 [2024-07-13 03:14:00.138417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.886 [2024-07-13 03:14:00.161764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.886 [2024-07-13 03:14:00.161839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.886 [2024-07-13 
03:14:00.161860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.886 [2024-07-13 03:14:00.184736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.886 [2024-07-13 03:14:00.184807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.886 [2024-07-13 03:14:00.184832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.886 [2024-07-13 03:14:00.206871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.886 [2024-07-13 03:14:00.206973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.886 [2024-07-13 03:14:00.206996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.886 [2024-07-13 03:14:00.229434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.886 [2024-07-13 03:14:00.229485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.886 [2024-07-13 03:14:00.229511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.886 [2024-07-13 03:14:00.251931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.886 [2024-07-13 03:14:00.252047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.886 [2024-07-13 03:14:00.252084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.886 [2024-07-13 03:14:00.273989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:53.886 [2024-07-13 03:14:00.274037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.886 [2024-07-13 03:14:00.274060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.886 00:24:53.886 Latency(us) 00:24:53.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.886 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:53.886 nvme0n1 : 2.01 11090.07 43.32 0.00 0.00 11531.71 10604.92 44087.85 00:24:53.886 =================================================================================================================== 00:24:53.886 Total : 11090.07 43.32 0.00 0.00 11531.71 10604.92 44087.85 00:24:53.886 0 00:24:53.886 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:53.886 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:53.886 03:14:00 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:53.886 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:53.886 | .driver_specific 00:24:53.886 | .nvme_error 00:24:53.886 | .status_code 00:24:53.886 | .command_transient_transport_error' 00:24:54.145 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 87 > 0 )) 00:24:54.145 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86391 00:24:54.145 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86391 ']' 00:24:54.145 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86391 00:24:54.145 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:54.145 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:54.145 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86391 00:24:54.145 killing process with pid 86391 00:24:54.145 Received shutdown signal, test time was about 2.000000 seconds 00:24:54.145 00:24:54.145 Latency(us) 00:24:54.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.145 =================================================================================================================== 00:24:54.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.145 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:54.145 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:54.145 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86391' 00:24:54.145 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86391 00:24:54.145 03:14:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86391 00:24:55.519 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:55.519 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:55.519 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:55.519 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:55.519 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:55.519 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86458 00:24:55.519 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86458 /var/tmp/bperf.sock 00:24:55.519 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:55.519 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86458 ']' 00:24:55.519 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:55.519 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:55.519 
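[Editor's note] The xtrace output above is where host/digest.sh turns the first run into a pass/fail verdict: get_transient_errcount queries bdevperf's RPC socket with bdev_get_iostat, filters the per-error-code NVMe statistics with jq, and the (( 87 > 0 )) check requires that at least one COMMAND TRANSIENT TRANSPORT ERROR was counted before the bdevperf process (pid 86391) is killed. A minimal sketch reconstructed from those trace lines follows; the helper name and hard-coded socket path are simply what this job uses, and the real script may differ in detail.

# sketch of the check traced above (not the verbatim host/digest.sh source)
get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}
errcount=$(get_transient_errcount nvme0n1)   # reported 87 in this run
(( errcount > 0 ))                           # the case fails if no transient errors were recorded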
03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:55.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:55.520 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:55.520 03:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:55.520 [2024-07-13 03:14:01.785911] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:55.520 [2024-07-13 03:14:01.786344] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86458 ] 00:24:55.520 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:55.520 Zero copy mechanism will not be used. 00:24:55.520 [2024-07-13 03:14:01.959710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.777 [2024-07-13 03:14:02.155052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.035 [2024-07-13 03:14:02.346993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:56.306 03:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:56.306 03:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:56.306 03:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:56.306 03:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:56.573 03:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:56.573 03:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.573 03:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:56.573 03:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.573 03:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:56.573 03:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:57.137 nvme0n1 00:24:57.137 03:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:57.137 03:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.137 03:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:57.137 03:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.137 03:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 
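[Editor's note] The trace above sets up the second case (randread, 131072-byte I/O, queue depth 16): bdevperf is started in wait mode (-z) on its own RPC socket, the NVMe bdev layer is told to keep per-error-code statistics and retry failed I/O indefinitely, the controller is attached with TCP data digest enabled (--ddgst), crc32c error injection is armed in the accel layer, and only then is the 2-second workload started. The sequence below is reconstructed from those xtrace lines; the accel_error_inject_error calls go through the harness's generic rpc_cmd helper, whose socket is not shown in the trace, so none is given for them here.

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock
$SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randread -o 131072 -t 2 -q 16 -z &   # wait for RPC-driven start
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable                  # injection off while connecting
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32            # arm crc32c corruption (-i 32 as used here)
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests                     # run I/O for 2 seconds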
00:24:57.137 03:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:57.137 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:57.137 Zero copy mechanism will not be used. 00:24:57.137 Running I/O for 2 seconds... 00:24:57.137 [2024-07-13 03:14:03.496513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.137 [2024-07-13 03:14:03.496608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.137 [2024-07-13 03:14:03.496636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.137 [2024-07-13 03:14:03.502518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.137 [2024-07-13 03:14:03.502587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.137 [2024-07-13 03:14:03.502617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.137 [2024-07-13 03:14:03.508319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.137 [2024-07-13 03:14:03.508387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.137 [2024-07-13 03:14:03.508426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.137 [2024-07-13 03:14:03.514485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.137 [2024-07-13 03:14:03.514568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.137 [2024-07-13 03:14:03.514594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.137 [2024-07-13 03:14:03.520427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.137 [2024-07-13 03:14:03.520486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.137 [2024-07-13 03:14:03.520509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.137 [2024-07-13 03:14:03.526154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.137 [2024-07-13 03:14:03.526245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.137 [2024-07-13 03:14:03.526268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.137 [2024-07-13 03:14:03.532226] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.137 [2024-07-13 03:14:03.532293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.137 [2024-07-13 03:14:03.532348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.137 [2024-07-13 03:14:03.538054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.137 [2024-07-13 03:14:03.538105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.137 [2024-07-13 03:14:03.538146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.137 [2024-07-13 03:14:03.544080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.137 [2024-07-13 03:14:03.544174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.137 [2024-07-13 03:14:03.544217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.137 [2024-07-13 03:14:03.550110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.137 [2024-07-13 03:14:03.550185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.137 [2024-07-13 03:14:03.550208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.137 [2024-07-13 03:14:03.556306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.137 [2024-07-13 03:14:03.556365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.137 [2024-07-13 03:14:03.556388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.137 [2024-07-13 03:14:03.562003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.137 [2024-07-13 03:14:03.562070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.138 [2024-07-13 03:14:03.562098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.138 [2024-07-13 03:14:03.567802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.138 [2024-07-13 03:14:03.567855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.138 [2024-07-13 03:14:03.567879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.138 [2024-07-13 03:14:03.573662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.138 [2024-07-13 03:14:03.573729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.138 [2024-07-13 03:14:03.573755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.138 [2024-07-13 03:14:03.579493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.138 [2024-07-13 03:14:03.579567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.138 [2024-07-13 03:14:03.579589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.138 [2024-07-13 03:14:03.585249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.138 [2024-07-13 03:14:03.585309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.138 [2024-07-13 03:14:03.585347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.138 [2024-07-13 03:14:03.591043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.138 [2024-07-13 03:14:03.591108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.138 [2024-07-13 03:14:03.591133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.138 [2024-07-13 03:14:03.596812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.138 [2024-07-13 03:14:03.596862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.138 [2024-07-13 03:14:03.596903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.138 [2024-07-13 03:14:03.602517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.138 [2024-07-13 03:14:03.602591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.138 [2024-07-13 03:14:03.602629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.138 [2024-07-13 03:14:03.608315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.138 [2024-07-13 03:14:03.608395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.138 [2024-07-13 
03:14:03.608434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.138 [2024-07-13 03:14:03.614166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.138 [2024-07-13 03:14:03.614238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.138 [2024-07-13 03:14:03.614292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.138 [2024-07-13 03:14:03.619839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.138 [2024-07-13 03:14:03.619902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.138 [2024-07-13 03:14:03.619944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.138 [2024-07-13 03:14:03.625941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.138 [2024-07-13 03:14:03.626013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.138 [2024-07-13 03:14:03.626041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.396 [2024-07-13 03:14:03.632100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.396 [2024-07-13 03:14:03.632179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.396 [2024-07-13 03:14:03.632203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.396 [2024-07-13 03:14:03.638195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.396 [2024-07-13 03:14:03.638271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.396 [2024-07-13 03:14:03.638294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.396 [2024-07-13 03:14:03.644073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.396 [2024-07-13 03:14:03.644167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.396 [2024-07-13 03:14:03.644201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.396 [2024-07-13 03:14:03.650158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.396 [2024-07-13 03:14:03.650212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.396 [2024-07-13 03:14:03.650250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.396 [2024-07-13 03:14:03.656220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.396 [2024-07-13 03:14:03.656273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.396 [2024-07-13 03:14:03.656300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.396 [2024-07-13 03:14:03.662201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.396 [2024-07-13 03:14:03.662277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.396 [2024-07-13 03:14:03.662315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.396 [2024-07-13 03:14:03.667971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.396 [2024-07-13 03:14:03.668045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.396 [2024-07-13 03:14:03.668068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.396 [2024-07-13 03:14:03.673799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.396 [2024-07-13 03:14:03.673874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.396 [2024-07-13 03:14:03.673914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.396 [2024-07-13 03:14:03.679658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.396 [2024-07-13 03:14:03.679714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.396 [2024-07-13 03:14:03.679740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.396 [2024-07-13 03:14:03.685734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.396 [2024-07-13 03:14:03.685845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.396 [2024-07-13 03:14:03.685884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.396 [2024-07-13 03:14:03.691915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.396 
[2024-07-13 03:14:03.691984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.396
[2024-07-13 03:14:03.692008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.396
[2024-07-13 03:14:03.698198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.396
[2024-07-13 03:14:03.698260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.396
[2024-07-13 03:14:03.698285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.396
[... the same three-entry pattern (nvme_tcp.c:1459 data digest error on tqpair=(0x61500002b280), READ sqid:1 cid:15 len:32 with a varying lba, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 6 ms from 03:14:03.704 through 03:14:04.365 ...]
[2024-07-13 03:14:04.371172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.915
[2024-07-13 03:14:04.371222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.915
[2024-07-13 03:14:04.371274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.915
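The repeated "data digest error" entries above come from the NVMe/TCP data digest (DDGST) check: the initiator computes a CRC32C over each received data PDU payload and, when it does not match the digest carried in the PDU, fails the I/O, which is why every error is paired with a READ command notice and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion. The short Python sketch below illustrates that digest comparison in isolation; it is not SPDK code, and verify_data_digest() and the sample payload are hypothetical names used only for illustration.

def crc32c(data: bytes) -> int:
    """Bitwise CRC32C (Castagnoli, reflected poly 0x82F63B78), dependency-free."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

def verify_data_digest(payload: bytes, received_ddgst: int) -> bool:
    """Return True when the payload's CRC32C matches the digest carried with the PDU."""
    return crc32c(payload) == received_ddgst

# Well-known CRC-32C check value
assert crc32c(b"123456789") == 0xE3069283

payload = bytes(512)                              # stand-in for a 512-byte data PDU payload
good_ddgst = crc32c(payload)
assert verify_data_digest(payload, good_ddgst)    # intact payload passes
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert not verify_data_digest(corrupted, good_ddgst)  # a single flipped bit reproduces the mismatch the log reports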
[2024-07-13 03:14:04.376829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:57.915
[2024-07-13 03:14:04.376920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.915
[2024-07-13 03:14:04.376943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.915
[... the same pattern continues for further LBAs from 03:14:04.382 through 03:14:04.490 ...]
[2024-07-13 03:14:04.496070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174
[2024-07-13 03:14:04.496121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.496140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.501986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174 [2024-07-13 03:14:04.502037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.502057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.507899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174 [2024-07-13 03:14:04.507961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.507982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.513565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174 [2024-07-13 03:14:04.513614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.513633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.519611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174 [2024-07-13 03:14:04.519707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.519745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.525610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174 [2024-07-13 03:14:04.525692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.525729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.532034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174 [2024-07-13 03:14:04.532097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.532119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.537953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174 
[2024-07-13 03:14:04.538018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.538039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.543561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174 [2024-07-13 03:14:04.543628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.543695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.549597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174 [2024-07-13 03:14:04.549647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.549667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.555293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174 [2024-07-13 03:14:04.555359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.555396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.561137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174 [2024-07-13 03:14:04.561191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.561212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.566923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.174 [2024-07-13 03:14:04.567031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.174 [2024-07-13 03:14:04.567053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.174 [2024-07-13 03:14:04.572777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.572829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.572866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.578481] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.578547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.578568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.584253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.584353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.584375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.590232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.590280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.590301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.595867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.595927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.595964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.601650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.601700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.601736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.607329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.607396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.607417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.613080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.613134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.613155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.619231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.619301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.619322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.625057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.625112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.625133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.631042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.631126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.631147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.636599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.636683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.636704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.642531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.642614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.642633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.648167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.648217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.648237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.654083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.654134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.654153] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.175 [2024-07-13 03:14:04.659743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.175 [2024-07-13 03:14:04.659793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.175 [2024-07-13 03:14:04.659831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.666258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.666326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.666360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.672313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.672367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.672388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.678072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.678139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.678160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.683808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.683874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.683895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.689611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.689661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.689697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.695512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.695562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.695597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.701102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.701156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.701177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.706800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.706868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.706890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.712512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.712562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.712583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.718275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.718325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.718344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.723964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.724043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.724096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.729887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.729997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.730033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.736111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 
03:14:04.736176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.736212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.742007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.742071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.742091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.747803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.747872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.747921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.753996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.754077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.754099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.759751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.759800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.759819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.765459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.765508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.765543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.771317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.771367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.771388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.776940] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.777028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.777050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.782851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.782930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.782982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.788948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.789022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.789045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.795082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.795163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.795184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.800783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.800864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.800915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.807071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.807149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.807185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.813194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.813248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.813270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.819122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.819190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.819226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.825194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.825247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.825269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.831211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.831263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.831284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.836854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.836955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.837004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.843003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.843081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.843102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.849039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.849091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.849113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.854843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.854925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.854962] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.860856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.860934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.860955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.867000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.867081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.867117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.872780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.872830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.872851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.878735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.878802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.878824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.884839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.884917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.884962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.434 [2024-07-13 03:14:04.890725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.434 [2024-07-13 03:14:04.890774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.434 [2024-07-13 03:14:04.890809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.435 [2024-07-13 03:14:04.896609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.435 [2024-07-13 03:14:04.896658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.435 [2024-07-13 03:14:04.896710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.435 [2024-07-13 03:14:04.902659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.435 [2024-07-13 03:14:04.902709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.435 [2024-07-13 03:14:04.902730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.435 [2024-07-13 03:14:04.908480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.435 [2024-07-13 03:14:04.908530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.435 [2024-07-13 03:14:04.908551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.435 [2024-07-13 03:14:04.914422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.435 [2024-07-13 03:14:04.914507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.435 [2024-07-13 03:14:04.914529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.435 [2024-07-13 03:14:04.920744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.435 [2024-07-13 03:14:04.920800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.435 [2024-07-13 03:14:04.920821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.926891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:04.926960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.926983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.933122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:04.933177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.933199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.939108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 
03:14:04.939162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.939184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.945311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:04.945392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.945450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.951465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:04.951532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.951570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.957290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:04.957344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.957366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.963216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:04.963297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.963341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.969189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:04.969253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.969275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.975157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:04.975224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.975275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.981175] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:04.981228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.981249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.987021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:04.987086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.987108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.992943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:04.993018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.993045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:04.998900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:04.998948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:04.998969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:05.004452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:05.004552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:05.004573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:05.010716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:05.010797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:05.010819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:05.017148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:05.017201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:05.017222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:05.023160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:05.023214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:05.023236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:05.029195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:05.029253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:05.029274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:05.035143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:05.035212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:05.035234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.693 [2024-07-13 03:14:05.041035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.693 [2024-07-13 03:14:05.041091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.693 [2024-07-13 03:14:05.041114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.047011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.047079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.047114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.053025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.053079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.053101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.058550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.058603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.058624] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.064579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.064633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.064656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.070517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.070570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.070591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.076475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.076528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.076550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.082473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.082540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.082578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.088192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.088246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.088268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.093865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.093946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.093968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.099559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.099611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.099631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.105342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.105396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.105438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.110872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.110938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.110962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.116634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.116688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.116710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.122344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.122397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.122420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.127947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.128017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.128038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.133827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.133908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.133931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.139597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.139668] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.139690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.145354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.145421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.145442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.150988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.151038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.151059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.156734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.156787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.156808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.162407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.162462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.162484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.168101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.168153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.168175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.173733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.173787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.173809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.694 [2024-07-13 03:14:05.179344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.694 [2024-07-13 03:14:05.179394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.694 [2024-07-13 03:14:05.179415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.952 [2024-07-13 03:14:05.185250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.952 [2024-07-13 03:14:05.185306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.952 [2024-07-13 03:14:05.185328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.952 [2024-07-13 03:14:05.191071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.952 [2024-07-13 03:14:05.191141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.952 [2024-07-13 03:14:05.191163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.952 [2024-07-13 03:14:05.196721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.952 [2024-07-13 03:14:05.196810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.952 [2024-07-13 03:14:05.196833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.952 [2024-07-13 03:14:05.202542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.952 [2024-07-13 03:14:05.202596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.952 [2024-07-13 03:14:05.202618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.952 [2024-07-13 03:14:05.208163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.952 [2024-07-13 03:14:05.208216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.952 [2024-07-13 03:14:05.208238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.952 [2024-07-13 03:14:05.214077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.952 [2024-07-13 03:14:05.214147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.952 [2024-07-13 03:14:05.214169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:58.952 [2024-07-13 03:14:05.220024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.952 [2024-07-13 03:14:05.220107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.220129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.226126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.226178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.226214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.231996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.232048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.232069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.238192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.238245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.238266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.244103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.244186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.244208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.250411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.250463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.250485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.256431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.256482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.256520] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.262443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.262496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.262517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.268521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.268574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.268595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.274581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.274626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.274646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.280389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.280441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.280462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.286363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.286413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.286434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.292458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.292513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.292546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.298478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.298531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:58.953 [2024-07-13 03:14:05.298553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.304387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.304455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.304477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.310143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.310194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.310214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.315944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.316005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.316026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.321941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.322007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.322028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.327711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.327794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.327816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.333736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.333803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.333824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.339473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.339557] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.339592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.345697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.345762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.345799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.351563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.351629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.351650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.357585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.357651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.357671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.363684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.363750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.363769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.369508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.369574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.369595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.375289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.375339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.375359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.381378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:24:58.953 [2024-07-13 03:14:05.381476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.381497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.387414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.953 [2024-07-13 03:14:05.387496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.953 [2024-07-13 03:14:05.387517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.953 [2024-07-13 03:14:05.393273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.954 [2024-07-13 03:14:05.393326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.954 [2024-07-13 03:14:05.393347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.954 [2024-07-13 03:14:05.399362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.954 [2024-07-13 03:14:05.399430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.954 [2024-07-13 03:14:05.399466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.954 [2024-07-13 03:14:05.405183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.954 [2024-07-13 03:14:05.405235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.954 [2024-07-13 03:14:05.405257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.954 [2024-07-13 03:14:05.411007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.954 [2024-07-13 03:14:05.411072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.954 [2024-07-13 03:14:05.411092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.954 [2024-07-13 03:14:05.416940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.954 [2024-07-13 03:14:05.417014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.954 [2024-07-13 03:14:05.417036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.954 [2024-07-13 03:14:05.422904] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.954 [2024-07-13 03:14:05.422965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.954 [2024-07-13 03:14:05.422985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.954 [2024-07-13 03:14:05.429001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.954 [2024-07-13 03:14:05.429054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.954 [2024-07-13 03:14:05.429076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.954 [2024-07-13 03:14:05.434944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.954 [2024-07-13 03:14:05.435039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.954 [2024-07-13 03:14:05.435075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.954 [2024-07-13 03:14:05.441107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:58.954 [2024-07-13 03:14:05.441161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.954 [2024-07-13 03:14:05.441189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:59.212 [2024-07-13 03:14:05.447200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:59.212 [2024-07-13 03:14:05.447270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.212 [2024-07-13 03:14:05.447292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:59.212 [2024-07-13 03:14:05.453319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:59.212 [2024-07-13 03:14:05.453375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.212 [2024-07-13 03:14:05.453397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:59.212 [2024-07-13 03:14:05.459170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:59.212 [2024-07-13 03:14:05.459220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.212 [2024-07-13 03:14:05.459240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:59.212 [2024-07-13 03:14:05.464846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:59.212 [2024-07-13 03:14:05.464941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.212 [2024-07-13 03:14:05.464962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:59.212 [2024-07-13 03:14:05.470828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:59.212 [2024-07-13 03:14:05.470927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.212 [2024-07-13 03:14:05.470948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:59.212 [2024-07-13 03:14:05.476764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:59.212 [2024-07-13 03:14:05.476814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.212 [2024-07-13 03:14:05.476850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:59.212 [2024-07-13 03:14:05.482572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:59.212 [2024-07-13 03:14:05.482640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.212 [2024-07-13 03:14:05.482677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:59.212 00:24:59.212 Latency(us) 00:24:59.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.212 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:59.212 nvme0n1 : 2.00 5224.34 653.04 0.00 0.00 3057.62 2532.07 11021.96 00:24:59.212 =================================================================================================================== 00:24:59.212 Total : 5224.34 653.04 0.00 0.00 3057.62 2532.07 11021.96 00:24:59.212 0 00:24:59.212 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:59.212 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:59.212 | .driver_specific 00:24:59.212 | .nvme_error 00:24:59.212 | .status_code 00:24:59.212 | .command_transient_transport_error' 00:24:59.212 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:59.212 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:59.470 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 337 > 0 )) 00:24:59.470 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86458 00:24:59.470 03:14:05 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86458 ']' 00:24:59.471 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86458 00:24:59.471 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:59.471 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:59.471 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86458 00:24:59.471 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:59.471 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:59.471 killing process with pid 86458 00:24:59.471 Received shutdown signal, test time was about 2.000000 seconds 00:24:59.471 00:24:59.471 Latency(us) 00:24:59.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.471 =================================================================================================================== 00:24:59.471 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:59.471 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86458' 00:24:59.471 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86458 00:24:59.471 03:14:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86458 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86525 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86525 /var/tmp/bperf.sock 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86525 ']' 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.846 03:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:00.846 [2024-07-13 03:14:07.065231] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:00.846 [2024-07-13 03:14:07.065408] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86525 ] 00:25:00.846 [2024-07-13 03:14:07.239389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.105 [2024-07-13 03:14:07.433145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.364 [2024-07-13 03:14:07.629205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:01.622 03:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.622 03:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:01.622 03:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:01.622 03:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:01.881 03:14:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:01.881 03:14:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.881 03:14:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:01.881 03:14:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.881 03:14:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.881 03:14:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:02.139 nvme0n1 00:25:02.139 03:14:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:02.139 03:14:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.139 03:14:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.139 03:14:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.139 03:14:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:02.139 03:14:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:02.398 Running I/O for 2 seconds... 
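The trace above sets up the write-path digest-error run end to end before the 2-second workload starts. A minimal sketch of the same sequence, reconstructed only from the commands visible in the trace (the /var/tmp/bperf.sock socket, the 10.0.0.2:4420 target address and all paths are copied from the traced commands; the socket behind the test's rpc_cmd helper is not visible in this excerpt and is left as the helper call):

  # Start bdevperf waiting for RPC configuration: 4 KiB random writes, queue depth 128, 2 s run (flags as traced above).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &

  # Keep NVMe error statistics and retry failed I/O indefinitely.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the NVMe/TCP controller with data digest enabled (--ddgst).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 256th crc32c operation so data digests stop matching
  # (issued through the test's rpc_cmd wrapper, whose RPC socket is not shown in this excerpt).
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

  # Drive the workload, then read back how many transient transport errors were recorded.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that follow are the expected result of this injection, and the jq filter above is the same one the test uses to verify that the error count is non-zero.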
00:25:02.398 [2024-07-13 03:14:08.719468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fef90 00:25:02.398 [2024-07-13 03:14:08.722905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.398 [2024-07-13 03:14:08.723023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.398 [2024-07-13 03:14:08.740643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:25:02.398 [2024-07-13 03:14:08.744021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.398 [2024-07-13 03:14:08.744103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:02.398 [2024-07-13 03:14:08.761877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:25:02.398 [2024-07-13 03:14:08.765177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.398 [2024-07-13 03:14:08.765229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:02.398 [2024-07-13 03:14:08.783208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:25:02.398 [2024-07-13 03:14:08.786517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.398 [2024-07-13 03:14:08.786598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:02.398 [2024-07-13 03:14:08.804449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:25:02.398 [2024-07-13 03:14:08.807788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.398 [2024-07-13 03:14:08.807854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.398 [2024-07-13 03:14:08.825352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:25:02.398 [2024-07-13 03:14:08.828614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.398 [2024-07-13 03:14:08.828679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.398 [2024-07-13 03:14:08.847048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128 00:25:02.398 [2024-07-13 03:14:08.850269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.398 [2024-07-13 03:14:08.850322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:02.398 [2024-07-13 03:14:08.869322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:25:02.398 [2024-07-13 03:14:08.872592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.398 [2024-07-13 03:14:08.872674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:02.657 [2024-07-13 03:14:08.891073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:25:02.657 [2024-07-13 03:14:08.894174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.657 [2024-07-13 03:14:08.894271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:02.657 [2024-07-13 03:14:08.913084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:25:02.657 [2024-07-13 03:14:08.916375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.657 [2024-07-13 03:14:08.916471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:02.657 [2024-07-13 03:14:08.934890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:25:02.657 [2024-07-13 03:14:08.937936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.657 [2024-07-13 03:14:08.937987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:02.657 [2024-07-13 03:14:08.955401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:25:02.657 [2024-07-13 03:14:08.958657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.657 [2024-07-13 03:14:08.958722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:02.657 [2024-07-13 03:14:08.977376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:25:02.657 [2024-07-13 03:14:08.980651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.657 [2024-07-13 03:14:08.980716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:02.657 [2024-07-13 03:14:08.999493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8618 00:25:02.657 [2024-07-13 03:14:09.002758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.657 [2024-07-13 03:14:09.002807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:02.657 [2024-07-13 03:14:09.020780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:25:02.657 [2024-07-13 03:14:09.023971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.657 [2024-07-13 03:14:09.024059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:02.657 [2024-07-13 03:14:09.042457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:25:02.657 [2024-07-13 03:14:09.045341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.657 [2024-07-13 03:14:09.045404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:02.657 [2024-07-13 03:14:09.064866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:25:02.657 [2024-07-13 03:14:09.067894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.657 [2024-07-13 03:14:09.067953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:02.657 [2024-07-13 03:14:09.085569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:25:02.657 [2024-07-13 03:14:09.088610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.657 [2024-07-13 03:14:09.088659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:02.657 [2024-07-13 03:14:09.106496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:25:02.657 [2024-07-13 03:14:09.109279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.657 [2024-07-13 03:14:09.109331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:02.657 [2024-07-13 03:14:09.128295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:25:02.657 [2024-07-13 03:14:09.131251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.657 [2024-07-13 03:14:09.131330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.149710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:25:02.915 [2024-07-13 03:14:09.152559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:02.915 [2024-07-13 03:14:09.152636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.171038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:25:02.915 [2024-07-13 03:14:09.173904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.915 [2024-07-13 03:14:09.173962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.192388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:25:02.915 [2024-07-13 03:14:09.195232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.915 [2024-07-13 03:14:09.195280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.213654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:25:02.915 [2024-07-13 03:14:09.216698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.915 [2024-07-13 03:14:09.216747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.234987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:25:02.915 [2024-07-13 03:14:09.237748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.915 [2024-07-13 03:14:09.237794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.255628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:25:02.915 [2024-07-13 03:14:09.258553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.915 [2024-07-13 03:14:09.258602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.276668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:25:02.915 [2024-07-13 03:14:09.279398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.915 [2024-07-13 03:14:09.279459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.297793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:25:02.915 [2024-07-13 03:14:09.300391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:5797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.915 [2024-07-13 03:14:09.300456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.318778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:25:02.915 [2024-07-13 03:14:09.321479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.915 [2024-07-13 03:14:09.321541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.339815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:25:02.915 [2024-07-13 03:14:09.342350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.915 [2024-07-13 03:14:09.342414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.361041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:25:02.915 [2024-07-13 03:14:09.363745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.915 [2024-07-13 03:14:09.363794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.382134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:25:02.915 [2024-07-13 03:14:09.384759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.915 [2024-07-13 03:14:09.384836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:02.915 [2024-07-13 03:14:09.403203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:25:02.915 [2024-07-13 03:14:09.405873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.915 [2024-07-13 03:14:09.405951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.172 [2024-07-13 03:14:09.424642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:25:03.172 [2024-07-13 03:14:09.427068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.172 [2024-07-13 03:14:09.427131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:03.172 [2024-07-13 03:14:09.445874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:25:03.172 [2024-07-13 03:14:09.448592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.172 [2024-07-13 03:14:09.448640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:03.172 [2024-07-13 03:14:09.467549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:25:03.172 [2024-07-13 03:14:09.470330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.172 [2024-07-13 03:14:09.470381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:03.172 [2024-07-13 03:14:09.488937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:25:03.172 [2024-07-13 03:14:09.491430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.172 [2024-07-13 03:14:09.491477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:03.172 [2024-07-13 03:14:09.509980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:25:03.172 [2024-07-13 03:14:09.512518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.172 [2024-07-13 03:14:09.512568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:03.172 [2024-07-13 03:14:09.531386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:25:03.172 [2024-07-13 03:14:09.533763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.172 [2024-07-13 03:14:09.533811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:03.172 [2024-07-13 03:14:09.552643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:25:03.172 [2024-07-13 03:14:09.554964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.172 [2024-07-13 03:14:09.555021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:03.172 [2024-07-13 03:14:09.573165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:25:03.172 [2024-07-13 03:14:09.575548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.172 [2024-07-13 03:14:09.575594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:03.172 [2024-07-13 03:14:09.594228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 
00:25:03.172 [2024-07-13 03:14:09.596572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.172 [2024-07-13 03:14:09.596635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:03.172 [2024-07-13 03:14:09.614888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:25:03.172 [2024-07-13 03:14:09.617121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.172 [2024-07-13 03:14:09.617170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:03.172 [2024-07-13 03:14:09.635983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:25:03.173 [2024-07-13 03:14:09.638375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.173 [2024-07-13 03:14:09.638423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:03.173 [2024-07-13 03:14:09.657437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:25:03.173 [2024-07-13 03:14:09.659740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.173 [2024-07-13 03:14:09.659786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:03.430 [2024-07-13 03:14:09.678784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:25:03.430 [2024-07-13 03:14:09.681184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.430 [2024-07-13 03:14:09.681235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:03.430 [2024-07-13 03:14:09.699710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:25:03.430 [2024-07-13 03:14:09.701935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.430 [2024-07-13 03:14:09.701993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:03.430 [2024-07-13 03:14:09.720323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:25:03.430 [2024-07-13 03:14:09.722544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.430 [2024-07-13 03:14:09.722607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:03.430 [2024-07-13 03:14:09.741573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:25:03.430 [2024-07-13 03:14:09.743692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.430 [2024-07-13 03:14:09.743769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:03.430 [2024-07-13 03:14:09.763089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:25:03.430 [2024-07-13 03:14:09.765320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.430 [2024-07-13 03:14:09.765370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:03.430 [2024-07-13 03:14:09.784087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:25:03.430 [2024-07-13 03:14:09.786110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.430 [2024-07-13 03:14:09.786206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:03.430 [2024-07-13 03:14:09.805279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:25:03.430 [2024-07-13 03:14:09.807401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.430 [2024-07-13 03:14:09.807463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:03.430 [2024-07-13 03:14:09.825990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:25:03.430 [2024-07-13 03:14:09.828061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.430 [2024-07-13 03:14:09.828126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:03.430 [2024-07-13 03:14:09.846679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:25:03.430 [2024-07-13 03:14:09.848732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.430 [2024-07-13 03:14:09.848777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:03.430 [2024-07-13 03:14:09.867637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:25:03.430 [2024-07-13 03:14:09.869784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.431 [2024-07-13 03:14:09.869834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:03.431 [2024-07-13 
03:14:09.888944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:25:03.431 [2024-07-13 03:14:09.890859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.431 [2024-07-13 03:14:09.890931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:03.431 [2024-07-13 03:14:09.909865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:25:03.431 [2024-07-13 03:14:09.911788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.431 [2024-07-13 03:14:09.911850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.689 [2024-07-13 03:14:09.931767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:25:03.689 [2024-07-13 03:14:09.933724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.689 [2024-07-13 03:14:09.933790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.689 [2024-07-13 03:14:09.953058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:25:03.689 [2024-07-13 03:14:09.954842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.689 [2024-07-13 03:14:09.954914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:03.689 [2024-07-13 03:14:09.974315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:25:03.689 [2024-07-13 03:14:09.976137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.689 [2024-07-13 03:14:09.976201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:03.689 [2024-07-13 03:14:09.995681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:25:03.689 [2024-07-13 03:14:09.997698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.689 [2024-07-13 03:14:09.997745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:03.689 [2024-07-13 03:14:10.017639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:25:03.689 [2024-07-13 03:14:10.019341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.689 [2024-07-13 03:14:10.019393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:03.689 [2024-07-13 03:14:10.040342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:25:03.689 [2024-07-13 03:14:10.042097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.689 [2024-07-13 03:14:10.042166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:03.689 [2024-07-13 03:14:10.060813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:25:03.689 [2024-07-13 03:14:10.062540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.689 [2024-07-13 03:14:10.062620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:03.689 [2024-07-13 03:14:10.090502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:25:03.689 [2024-07-13 03:14:10.093909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.689 [2024-07-13 03:14:10.093988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.689 [2024-07-13 03:14:10.111395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:25:03.689 [2024-07-13 03:14:10.114688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.689 [2024-07-13 03:14:10.114761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:03.689 [2024-07-13 03:14:10.133006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:25:03.689 [2024-07-13 03:14:10.136262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.689 [2024-07-13 03:14:10.136320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:03.689 [2024-07-13 03:14:10.154165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:25:03.689 [2024-07-13 03:14:10.157388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.689 [2024-07-13 03:14:10.157449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:03.689 [2024-07-13 03:14:10.176230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:25:03.689 [2024-07-13 03:14:10.179631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.689 [2024-07-13 03:14:10.179692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:03.947 [2024-07-13 03:14:10.198042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:25:03.947 [2024-07-13 03:14:10.201309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.947 [2024-07-13 03:14:10.201399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:03.947 [2024-07-13 03:14:10.219886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:25:03.947 [2024-07-13 03:14:10.222944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.947 [2024-07-13 03:14:10.223002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:03.947 [2024-07-13 03:14:10.240917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:25:03.947 [2024-07-13 03:14:10.243920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.947 [2024-07-13 03:14:10.243979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:03.947 [2024-07-13 03:14:10.262214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:25:03.947 [2024-07-13 03:14:10.265418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.947 [2024-07-13 03:14:10.265477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:03.947 [2024-07-13 03:14:10.284090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:25:03.947 [2024-07-13 03:14:10.287290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.947 [2024-07-13 03:14:10.287378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:03.947 [2024-07-13 03:14:10.306333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:25:03.947 [2024-07-13 03:14:10.309573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.947 [2024-07-13 03:14:10.309661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:03.947 [2024-07-13 03:14:10.328318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:25:03.947 [2024-07-13 03:14:10.331270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8226 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:03.947 [2024-07-13 03:14:10.331327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:03.947 [2024-07-13 03:14:10.348267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:25:03.947 [2024-07-13 03:14:10.351378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.947 [2024-07-13 03:14:10.351432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:03.947 [2024-07-13 03:14:10.370162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:25:03.947 [2024-07-13 03:14:10.373009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.947 [2024-07-13 03:14:10.373067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:03.947 [2024-07-13 03:14:10.390404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:25:03.947 [2024-07-13 03:14:10.393387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.947 [2024-07-13 03:14:10.393450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:03.947 [2024-07-13 03:14:10.412221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:25:03.947 [2024-07-13 03:14:10.415333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.947 [2024-07-13 03:14:10.415392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:03.947 [2024-07-13 03:14:10.434804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:25:03.947 [2024-07-13 03:14:10.437812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.947 [2024-07-13 03:14:10.437921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:04.205 [2024-07-13 03:14:10.456312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:25:04.205 [2024-07-13 03:14:10.459251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.205 [2024-07-13 03:14:10.459327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:04.205 [2024-07-13 03:14:10.477749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:25:04.205 [2024-07-13 03:14:10.480614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:17645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.205 [2024-07-13 03:14:10.480686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:04.205 [2024-07-13 03:14:10.499193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:25:04.205 [2024-07-13 03:14:10.502122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.205 [2024-07-13 03:14:10.502225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:04.205 [2024-07-13 03:14:10.520523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:25:04.205 [2024-07-13 03:14:10.523386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.205 [2024-07-13 03:14:10.523443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:04.205 [2024-07-13 03:14:10.542149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:25:04.205 [2024-07-13 03:14:10.544883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.205 [2024-07-13 03:14:10.544987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:04.205 [2024-07-13 03:14:10.563399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:25:04.205 [2024-07-13 03:14:10.566113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.205 [2024-07-13 03:14:10.566217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:04.205 [2024-07-13 03:14:10.584227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:25:04.205 [2024-07-13 03:14:10.586851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.205 [2024-07-13 03:14:10.586927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:04.205 [2024-07-13 03:14:10.604825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:25:04.205 [2024-07-13 03:14:10.607454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.205 [2024-07-13 03:14:10.607555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:04.205 [2024-07-13 03:14:10.626018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:25:04.205 [2024-07-13 03:14:10.628621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.205 [2024-07-13 03:14:10.628676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:04.205 [2024-07-13 03:14:10.646892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:25:04.205 [2024-07-13 03:14:10.649797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.205 [2024-07-13 03:14:10.649896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:04.205 [2024-07-13 03:14:10.668275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:25:04.206 [2024-07-13 03:14:10.670934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.206 [2024-07-13 03:14:10.671046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:04.206 [2024-07-13 03:14:10.689638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:25:04.206 [2024-07-13 03:14:10.692437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.206 [2024-07-13 03:14:10.692489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:04.206 00:25:04.206 Latency(us) 00:25:04.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.206 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:04.206 nvme0n1 : 2.01 11841.48 46.26 0.00 0.00 10797.73 9115.46 40751.48 00:25:04.206 =================================================================================================================== 00:25:04.206 Total : 11841.48 46.26 0.00 0.00 10797.73 9115.46 40751.48 00:25:04.464 0 00:25:04.464 03:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:04.464 03:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:04.464 03:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:04.464 03:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:04.464 | .driver_specific 00:25:04.464 | .nvme_error 00:25:04.464 | .status_code 00:25:04.464 | .command_transient_transport_error' 00:25:04.722 03:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 93 > 0 )) 00:25:04.722 03:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86525 00:25:04.722 03:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86525 ']' 00:25:04.722 03:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86525 00:25:04.722 03:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- 
# uname 00:25:04.722 03:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:04.722 03:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86525 00:25:04.722 03:14:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:04.722 killing process with pid 86525 00:25:04.722 03:14:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:04.722 03:14:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86525' 00:25:04.722 Received shutdown signal, test time was about 2.000000 seconds 00:25:04.722 00:25:04.722 Latency(us) 00:25:04.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.722 =================================================================================================================== 00:25:04.722 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.722 03:14:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86525 00:25:04.722 03:14:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86525 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86592 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86592 /var/tmp/bperf.sock 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86592 ']' 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:05.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:05.656 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:05.657 03:14:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:05.915 [2024-07-13 03:14:12.206287] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:05.915 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:05.915 Zero copy mechanism will not be used. 
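The trace above (host/digest.sh@56-60) brings up a second bdevperf instance for the 128 KiB / qd=16 error run: it is launched with -z so it idles until configured over its own RPC socket, and the script waits for that socket before issuing any RPCs. The sketch below reproduces that launch step under stated assumptions: the socket-polling loop is a stand-in for the suite's waitforlisten helper from autotest_common.sh, and SPDK_DIR simply mirrors the path shown in the trace.

```bash
#!/usr/bin/env bash
# Minimal sketch of the bdevperf launch traced in host/digest.sh@57-60.
# The polling loop below is an assumption standing in for waitforlisten.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk   # path taken from the trace
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf with no job config (-z) so it waits for RPCs:
# core mask 0x2, 128 KiB random writes, queue depth 16, 2 s run time.
"$SPDK_DIR/build/examples/bdevperf" \
    -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Stand-in for waitforlisten: poll until the RPC socket answers.
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
echo "bdevperf (pid $bperfpid) is listening on $BPERF_SOCK"
```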
00:25:05.915 [2024-07-13 03:14:12.206482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86592 ] 00:25:05.915 [2024-07-13 03:14:12.380255] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.173 [2024-07-13 03:14:12.575932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.432 [2024-07-13 03:14:12.766323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:06.690 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:06.690 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:06.690 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:06.690 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:06.948 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:06.948 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.948 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.948 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.948 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.948 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.207 nvme0n1 00:25:07.207 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:07.207 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.207 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:07.207 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.207 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:07.207 03:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:07.466 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:07.466 Zero copy mechanism will not be used. 00:25:07.466 Running I/O for 2 seconds... 
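Taken together, the RPC traffic above shows the whole digest-error sequence for this run: error statistics are enabled in the bdev_nvme layer, the controller is attached over TCP with data digest enabled while crc32c corruption is disabled, corruption is then injected for 32 crc32c operations, the random-write job is started through bdevperf.py, and the transient-transport-error counter is read back from bdev_get_iostat exactly as get_transient_errcount did earlier in the log. The sketch below strings those same calls together; the rpc helper function and the TARGET_SOCK path are assumptions standing in for the suite's rpc_cmd/bperf_rpc wrappers.

```bash
#!/usr/bin/env bash
# Sketch of the digest-error flow traced in host/digest.sh@61-71.
# TARGET_SOCK is an assumed path for the nvmf target's RPC socket; the
# suite routes these calls through its rpc_cmd / bperf_rpc wrappers.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
TARGET_SOCK=/var/tmp/spdk.sock          # assumption
rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$1" "${@:2}"; }

# Keep per-controller error statistics and retry indefinitely so digest
# failures are counted rather than surfaced as I/O errors.
rpc "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with data digest enabled while crc32c corruption is disabled...
rpc "$TARGET_SOCK" accel_error_inject_error -o crc32c -t disable
rpc "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then corrupt 32 crc32c operations so data-digest verification fails
# and completions come back as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
rpc "$TARGET_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the configured randwrite workload against the attached bdev.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

# Read back the transient-error counter, as get_transient_errcount does,
# and fail if no digest errors were recorded.
count=$(rpc "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( count > 0 ))
```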
00:25:07.466 [2024-07-13 03:14:13.796061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.466 [2024-07-13 03:14:13.796507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.466 [2024-07-13 03:14:13.796554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.466 [2024-07-13 03:14:13.803715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.466 [2024-07-13 03:14:13.804123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.466 [2024-07-13 03:14:13.804170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.466 [2024-07-13 03:14:13.811068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.466 [2024-07-13 03:14:13.811444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.466 [2024-07-13 03:14:13.811500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.466 [2024-07-13 03:14:13.818392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.466 [2024-07-13 03:14:13.818813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.466 [2024-07-13 03:14:13.818859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.825667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.826104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.826151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.832823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.833267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.833321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.839984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.840454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.840500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.847757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.848165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.848228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.854984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.855431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.855486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.862209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.862656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.862729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.869738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.870144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.870197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.876870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.877285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.877356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.884145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.884606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.884649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.891190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.891603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 
03:14:13.891671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.898479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.898933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.899021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.905729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.906204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.906256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.912881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.913325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.913379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.919916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.920345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.920405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.927153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.927602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.927647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.934390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.934816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.934869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.941609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.942022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.942082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.948580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.949030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.949084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.467 [2024-07-13 03:14:13.955882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.467 [2024-07-13 03:14:13.956309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-13 03:14:13.956365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:13.963346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:13.963764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:13.963810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:13.970667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:13.971156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:13.971212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:13.977893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:13.978293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:13.978362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:13.985368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:13.985749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:13.985794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:13.992429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:13.992847] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:13.992913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:13.999649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.000052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.000110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.006656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.007118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.007164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.014126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.014501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.014558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.021284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.021653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.021698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.028504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.028864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.028926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.035824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.036240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.036294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.043247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.043636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.043681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.051022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.051429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.051487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.058412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.058784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.058858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.065987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.066369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.066414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.073422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.073840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.073910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.080820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.081281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.081326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.088166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.088579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.088623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 
03:14:14.095414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.095826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.095908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.102534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.102959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.103034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.109918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.110286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.110339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.117178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.117572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.117626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.124590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.124988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.125032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.132052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.132449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.132504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.139534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.139996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.140056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.147047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.147427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.147481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.154492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.154919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.154982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.161990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.162388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.162432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.169268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.169628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.169676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.176358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.726 [2024-07-13 03:14:14.176731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-13 03:14:14.176799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.726 [2024-07-13 03:14:14.183528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.727 [2024-07-13 03:14:14.183936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-13 03:14:14.183991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.727 [2024-07-13 03:14:14.190848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.727 [2024-07-13 03:14:14.191239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-13 03:14:14.191298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.727 [2024-07-13 03:14:14.198228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.727 [2024-07-13 03:14:14.198619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-13 03:14:14.198673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.727 [2024-07-13 03:14:14.205296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.727 [2024-07-13 03:14:14.205669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-13 03:14:14.205714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.727 [2024-07-13 03:14:14.212564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.727 [2024-07-13 03:14:14.212941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-13 03:14:14.213008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.219867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.220296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.220340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.227026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.227394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.227439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.234254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.234619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.234673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.241498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.241880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.241936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.248613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.249034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.249080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.255677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.256078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.256131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.262770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.263185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.263230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.269906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.270324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.270377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.277207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.277574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.277626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.284494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.284927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.284993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.291917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.292329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.292381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.299246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.299593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.299645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.306389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.306825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.306869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.313932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.314326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.314381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.321178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.321549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.321594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.328632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.329089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.329133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.335968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.336345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.336398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.343474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.343865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.343919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.350950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.351407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.351459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.358424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.986 [2024-07-13 03:14:14.358839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.986 [2024-07-13 03:14:14.358908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.986 [2024-07-13 03:14:14.366012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.366439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.366484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.373656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.374091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.374159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.381085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.381475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.381517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.388230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.388685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.388725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.395620] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.396039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.396112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.403096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.403496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.403539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.410345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.410732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.410810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.417764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.418200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.418251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.425326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.425719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.425764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.432455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.432885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.433002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.439733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.440241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.440293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.447204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.447693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.447736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.454509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.454907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.454979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.462023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.462395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.462440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.469599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:07.987 [2024-07-13 03:14:14.470069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.987 [2024-07-13 03:14:14.470115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.987 [2024-07-13 03:14:14.477327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.246 [2024-07-13 03:14:14.477726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.246 [2024-07-13 03:14:14.477782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.246 [2024-07-13 03:14:14.485112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.246 [2024-07-13 03:14:14.485481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.246 [2024-07-13 03:14:14.485528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.246 [2024-07-13 03:14:14.492814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.246 [2024-07-13 03:14:14.493257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.246 [2024-07-13 03:14:14.493326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.246 [2024-07-13 03:14:14.500392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.246 [2024-07-13 03:14:14.500816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.246 [2024-07-13 03:14:14.500862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.246 [2024-07-13 03:14:14.507603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.508034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.508079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.514748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.515218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.515270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.522161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.522589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.522632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.529400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.529863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.529919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.536639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.537059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.537112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.544057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.544491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.544535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.551263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.551667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.551719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.558186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.558588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.558661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.565497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.565926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.565980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.572750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.573171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.573227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.579887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.580367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.580420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.587138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.587549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.587592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.594228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.594677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.594747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.601459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.601914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.601969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.608729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.609160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.609205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.616368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.616787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.616860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.623613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.624000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.624045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.630909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.631355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.631399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.638288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.638665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.638747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.645848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.646306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.646350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.653380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.653759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.653815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.660645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.661152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.661209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.668015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.668418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.668465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.675234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.675656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.675710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.682629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.683027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.683087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.689782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.690272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.690317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.697088] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.697471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.697539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.704132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.704534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.704578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.247 [2024-07-13 03:14:14.711217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.247 [2024-07-13 03:14:14.711636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.247 [2024-07-13 03:14:14.711681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.248 [2024-07-13 03:14:14.718395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.248 [2024-07-13 03:14:14.718845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.248 [2024-07-13 03:14:14.718911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.248 [2024-07-13 03:14:14.725796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.248 [2024-07-13 03:14:14.726271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.248 [2024-07-13 03:14:14.726316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.248 [2024-07-13 03:14:14.733088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.248 [2024-07-13 03:14:14.733446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.248 [2024-07-13 03:14:14.733498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.740625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.741064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.741119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.747833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.748295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.748341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.755141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.755550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.755635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.762487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.762888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.762945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.770074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.770510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.770570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.777363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.777744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.777798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.785191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.785639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.785684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.792814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.793240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.793295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.799897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.800373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.800413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.807495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.807932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.807988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.814652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.815120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.815173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.821930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.822330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.822413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.507 [2024-07-13 03:14:14.829659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.507 [2024-07-13 03:14:14.830072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.507 [2024-07-13 03:14:14.830116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.836933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.837349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.837403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.844126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.844557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.844602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.851378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.851870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.851922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.858719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.859156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.859209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.865891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.866381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.866423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.873243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.873634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.873689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.880494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.880944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.881019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.887792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.888243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.888303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.895136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.895537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.895590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.902353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.902729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.902798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.909890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.910401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.910452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.917256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.917673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.917745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.924519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.924950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.925018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.931704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.932139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.932216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.939112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.939485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.939539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.946342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.946747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.946792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.953593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.954025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.954077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.960829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.961225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.961281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.967971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.968357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.968402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.975186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.975567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.975612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.982423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.982845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.982931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.989809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.990222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.990267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.508 [2024-07-13 03:14:14.997171] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.508 [2024-07-13 03:14:14.997551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.508 [2024-07-13 03:14:14.997598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.768 [2024-07-13 03:14:15.004656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.768 [2024-07-13 03:14:15.005058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.768 [2024-07-13 03:14:15.005105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.768 [2024-07-13 03:14:15.012108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.768 [2024-07-13 03:14:15.012502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.768 [2024-07-13 03:14:15.012547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.768 [2024-07-13 03:14:15.019275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.768 [2024-07-13 03:14:15.019653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.768 [2024-07-13 03:14:15.019713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.768 [2024-07-13 03:14:15.026506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.768 [2024-07-13 03:14:15.026927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.768 [2024-07-13 03:14:15.026985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.768 [2024-07-13 03:14:15.033647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.768 [2024-07-13 03:14:15.034052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.768 [2024-07-13 03:14:15.034096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.768 [2024-07-13 03:14:15.040725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.768 [2024-07-13 03:14:15.041160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.768 [2024-07-13 03:14:15.041205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.768 [2024-07-13 03:14:15.048044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.768 [2024-07-13 03:14:15.048522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.768 [2024-07-13 03:14:15.048567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.055509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.055927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.055984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.062906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.063369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.063414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.070204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.070663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.070706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.077754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.078144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.078189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.084956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.085366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.085427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.092401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.092783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.092828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.099954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.100412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.100455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.107054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.107470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.107515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.114348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.114751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.114796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.121601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.122053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.122097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.129012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.129370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.129413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.136533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.136948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.137017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.144043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.144423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.144468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.151512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.151929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.151984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.159159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.159632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.159677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.166699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.167108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.167164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.173913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.174402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.174460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.181245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.181664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.181708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.188562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.189000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.189044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.195919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.196399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.196458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.203558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.204030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.204075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.210798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.211291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.211336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.218352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.218744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.218792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.225710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.226142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.226188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.233052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.233418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.233463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.240345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.769 [2024-07-13 03:14:15.240753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.769 [2024-07-13 03:14:15.240798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.769 [2024-07-13 03:14:15.247800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:25:08.770 [2024-07-13 03:14:15.248256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.770 [2024-07-13 03:14:15.248299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.770 [2024-07-13 03:14:15.255282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:08.770 [2024-07-13 03:14:15.255705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.770 [2024-07-13 03:14:15.255759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.029 [2024-07-13 03:14:15.262911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.029 [2024-07-13 03:14:15.263389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.029 [2024-07-13 03:14:15.263435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.029 [2024-07-13 03:14:15.270245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.029 [2024-07-13 03:14:15.270674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.029 [2024-07-13 03:14:15.270733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.029 [2024-07-13 03:14:15.277786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.029 [2024-07-13 03:14:15.278186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.029 [2024-07-13 03:14:15.278231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.029 [2024-07-13 03:14:15.285221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.029 [2024-07-13 03:14:15.285634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.029 [2024-07-13 03:14:15.285679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.029 [2024-07-13 03:14:15.292699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.029 [2024-07-13 03:14:15.293149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.029 [2024-07-13 03:14:15.293195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.029 [2024-07-13 03:14:15.300015] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.029 [2024-07-13 03:14:15.300373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.029 [2024-07-13 03:14:15.300418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.029 [2024-07-13 03:14:15.307228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.029 [2024-07-13 03:14:15.307619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.029 [2024-07-13 03:14:15.307664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.029 [2024-07-13 03:14:15.314695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.029 [2024-07-13 03:14:15.315076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.029 [2024-07-13 03:14:15.315121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.029 [2024-07-13 03:14:15.322325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.029 [2024-07-13 03:14:15.322686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.029 [2024-07-13 03:14:15.322731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.029 [2024-07-13 03:14:15.330008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.330396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.330440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.337797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.338187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.338232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.345274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.345641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.345685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.352574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.352975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.353021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.359660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.360047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.360092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.367241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.367648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.367693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.374553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.374942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.374998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.381937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.382323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.382368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.389314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.389672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.389716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.396944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.397337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.397381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.404449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.404896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.404950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.411778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.412158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.412202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.419224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.419621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.419665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.426741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.427122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.427167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.433936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.434310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.434355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.440866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.441265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.441319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.447618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.447995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.448040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.454772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.455172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.455216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.462109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.462470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.462514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.468925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.469292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.469336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.476053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.476409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.476459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.482879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.483249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.483294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.489741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.490116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.490156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.496876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.497262] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.497307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.504134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.504496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.504539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.511365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.511730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.511774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.030 [2024-07-13 03:14:15.518614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.030 [2024-07-13 03:14:15.519037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.030 [2024-07-13 03:14:15.519095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.525961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.526352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.526399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.533085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.533443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.533488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.540374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.540788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.540833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.548088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 
[2024-07-13 03:14:15.548514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.548560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.555358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.555719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.555779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.562953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.563374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.563418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.570567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.570942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.570986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.578439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.578813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.578858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.585878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.586314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.586360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.593437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.593859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.593917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.600638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.601024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.601071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.608534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.608907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.608964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.616329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.616773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.616817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.624342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.624728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.624772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.631720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.632092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.632136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.290 [2024-07-13 03:14:15.638759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.290 [2024-07-13 03:14:15.639140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.290 [2024-07-13 03:14:15.639185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.645707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.646080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.646124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.291 
[2024-07-13 03:14:15.652656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.653076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.653122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.660374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.660730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.660774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.667965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.668385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.668430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.675845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.676273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.676317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.683717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.684127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.684171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.691317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.691677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.691721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.698794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.699229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.699273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.706431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.706838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.706896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.714033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.714409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.714453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.721171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.721534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.721578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.728916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.729329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.729374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.736428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.736830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.736875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.744154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.744556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.744600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.751326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.751795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 
03:14:15.751839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.758630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.759074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.759119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.766055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.766492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.766540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.773245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.773667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.773711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.291 [2024-07-13 03:14:15.781204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.291 [2024-07-13 03:14:15.781594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.291 [2024-07-13 03:14:15.781645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.550 [2024-07-13 03:14:15.788270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:25:09.550 [2024-07-13 03:14:15.788657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.550 [2024-07-13 03:14:15.788703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.550 00:25:09.550 Latency(us) 00:25:09.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.550 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:09.550 nvme0n1 : 2.00 4213.57 526.70 0.00 0.00 3786.34 2144.81 7983.48 00:25:09.550 =================================================================================================================== 00:25:09.550 Total : 4213.57 526.70 0.00 0.00 3786.34 2144.81 7983.48 00:25:09.550 0 00:25:09.550 03:14:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:09.550 03:14:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:09.550 03:14:15 
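The long run of `data_crc32_calc_done: *ERROR*: Data digest error` records above is the digest-error test exercising the NVMe/TCP data digest (DDGST) path: every WRITE whose CRC32C data digest fails verification is completed back with COMMAND TRANSIENT TRANSPORT ERROR (00/22), a transient status that the test counts rather than treating as data corruption. The bdevperf summary that follows is self-consistent: 4213.57 IOPS of 131072-byte (128 KiB) writes is 4213.57 × 0.125 MiB ≈ 526.7 MiB/s, matching the MiB/s column. For reference, a minimal CRC-32C sketch is shown below; SPDK's real digest code sits behind the `data_crc32_calc_done` callback named in the log, so the function and example payload here are purely illustrative.

```python
# Minimal CRC-32C (Castagnoli) sketch, for illustration only. SPDK uses an
# optimized implementation behind the data_crc32_calc_done callback seen in
# the log; this function and its usage are assumptions, not SPDK code.

def crc32c(data: bytes) -> int:
    """Bitwise, reflected CRC-32C: poly 0x1EDC6F41 (reflected 0x82F63B38),
    init 0xFFFFFFFF, final XOR 0xFFFFFFFF."""
    poly = 0x82F63B38
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (poly if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard published check value for CRC-32C:
assert crc32c(b"123456789") == 0xE3069283

# Conceptually, a data digest check on a received PDU payload compares
# crc32c(pdu_data) against the DDGST carried in the PDU; a mismatch is what
# produces the "Data digest error" lines above.
```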
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:09.550 03:14:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:09.550 | .driver_specific 00:25:09.550 | .nvme_error 00:25:09.550 | .status_code 00:25:09.550 | .command_transient_transport_error' 00:25:09.837 03:14:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 272 > 0 )) 00:25:09.837 03:14:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86592 00:25:09.837 03:14:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86592 ']' 00:25:09.837 03:14:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86592 00:25:09.837 03:14:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:09.837 03:14:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:09.837 03:14:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86592 00:25:09.837 03:14:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:09.837 killing process with pid 86592 00:25:09.837 03:14:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:09.837 03:14:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86592' 00:25:09.837 03:14:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86592 00:25:09.837 Received shutdown signal, test time was about 2.000000 seconds 00:25:09.837 00:25:09.837 Latency(us) 00:25:09.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.837 =================================================================================================================== 00:25:09.837 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.837 03:14:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86592 00:25:11.241 03:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 86359 00:25:11.241 03:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86359 ']' 00:25:11.241 03:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86359 00:25:11.241 03:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:11.241 03:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:11.241 03:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86359 00:25:11.241 03:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:11.241 killing process with pid 86359 00:25:11.241 03:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:11.241 03:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86359' 00:25:11.241 03:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86359 00:25:11.241 03:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 
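The `get_transient_errcount` helper traced above retrieves that counter through the bdevperf RPC socket, `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`, with jq pulling `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error`; the subsequent `(( 272 > 0 ))` shows 272 transient-transport-error completions were recorded, so the check passes. A rough Python equivalent, reusing the exact rpc.py invocation and JSON path from the trace (the wrapper function itself is an assumption, not part of the test scripts):

```python
# Illustrative re-implementation of get_transient_errcount: query SPDK's rpc.py
# against the bdevperf RPC socket and read the transient-transport-error counter.
# Paths and field names mirror the trace above; the wrapper is hypothetical.
import json
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bperf.sock"

def get_transient_errcount(bdev: str) -> int:
    out = subprocess.check_output([RPC, "-s", SOCK, "bdev_get_iostat", "-b", bdev])
    stats = json.loads(out)
    # Same path as the jq filter in the log:
    # .bdevs[0] | .driver_specific | .nvme_error | .status_code
    #           | .command_transient_transport_error
    return stats["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"
    ]

if __name__ == "__main__":
    count = get_transient_errcount("nvme0n1")
    print(count)      # the run above reported 272
    assert count > 0  # mirrors the script's (( errcount > 0 )) check
```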
86359 00:25:12.178 00:25:12.179 real 0m23.381s 00:25:12.179 user 0m44.377s 00:25:12.179 sys 0m4.697s 00:25:12.179 03:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:12.179 03:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.179 ************************************ 00:25:12.179 END TEST nvmf_digest_error 00:25:12.179 ************************************ 00:25:12.179 03:14:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:12.179 03:14:18 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:12.179 03:14:18 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:12.179 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:12.179 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:12.179 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:12.179 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:25:12.179 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:12.179 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:12.179 rmmod nvme_tcp 00:25:12.438 rmmod nvme_fabrics 00:25:12.438 rmmod nvme_keyring 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 86359 ']' 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 86359 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 86359 ']' 00:25:12.438 Process with pid 86359 is not found 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 86359 00:25:12.438 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (86359) - No such process 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 86359 is not found' 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:12.438 00:25:12.438 real 0m48.213s 00:25:12.438 user 1m30.488s 00:25:12.438 sys 0m9.747s 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:12.438 ************************************ 00:25:12.438 03:14:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:12.438 END TEST nvmf_digest 00:25:12.438 ************************************ 00:25:12.438 
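The nvmf_host_multipath run that starts below first rebuilds the veth/namespace topology (initiator address 10.0.0.1, target addresses 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all attached to the nvmf_br bridge), then repeatedly flips the ANA state of the two listeners on 10.0.0.2:4420 and 10.0.0.2:4421 and checks, via a bpftrace probe on the target, that bdevperf I/O really lands on the expected port. The sketch below is a simplified reconstruction of that set_ANA_state / confirm_io_on_port pattern, pieced together only from the xtrace lines in this log; the canonical helpers live in test/nvmf/host/multipath.sh and handle redirections, trace-file paths and cleanup somewhat differently, so treat this as a reading aid rather than the actual implementation.

#!/usr/bin/env bash
# Reconstructed from the xtrace below; rpc path, NQN and nvmf_tgt pid (86875) are the ones printed in this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
nvmfapp_pid=86875

set_ANA_state() {   # $1 = ANA state for listener 4420, $2 = ANA state for listener 4421
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

confirm_io_on_port() {   # $1 = expected ana_state, $2 = expected port ("" when no path should carry I/O)
        # attach the nvmf_path.bt probes to the running nvmf_tgt and record @path[addr, port] counters
        /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$nvmfapp_pid" \
                /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
        dtrace_pid=$!
        sleep 6   # let bdevperf push I/O while the probes count it per path
        # which listener currently reports the expected ANA state?
        active_port=$("$rpc" nvmf_subsystem_get_listeners "$nqn" \
                | jq -r ".[] | select (.ana_states[0].ana_state==\"$1\") | .address.trsvcid")
        # which port did the traced I/O actually use? (first "@path[10.0.0.2, PORT]: N" line)
        port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
        kill "$dtrace_pid"
        rm -f trace.txt
        [[ $active_port == "$2" && $port == "$2" ]]   # the function's exit status is the assertion
}

# usage, mirroring the sequence traced below:
#   set_ANA_state non_optimized optimized   && confirm_io_on_port optimized 4421
#   set_ANA_state non_optimized inaccessible && confirm_io_on_port non_optimized 4420
#   set_ANA_state inaccessible inaccessible  && confirm_io_on_port '' ''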
03:14:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:12.438 03:14:18 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:25:12.438 03:14:18 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:25:12.438 03:14:18 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:12.438 03:14:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:12.438 03:14:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:12.438 03:14:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.438 ************************************ 00:25:12.438 START TEST nvmf_host_multipath 00:25:12.438 ************************************ 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:12.438 * Looking for test storage... 00:25:12.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.438 03:14:18 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:12.439 03:14:18 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.439 03:14:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:25:12.698 Cannot find device "nvmf_tgt_br" 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:12.698 Cannot find device "nvmf_tgt_br2" 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:12.698 Cannot find device "nvmf_tgt_br" 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:25:12.698 03:14:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:12.698 Cannot find device "nvmf_tgt_br2" 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:12.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:12.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:12.698 
03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:12.698 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:12.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:25:12.958 00:25:12.958 --- 10.0.0.2 ping statistics --- 00:25:12.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.958 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:12.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:12.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:25:12.958 00:25:12.958 --- 10.0.0.3 ping statistics --- 00:25:12.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.958 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:12.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:12.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:12.958 00:25:12.958 --- 10.0.0.1 ping statistics --- 00:25:12.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.958 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=86875 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 86875 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 86875 ']' 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:12.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.958 03:14:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:12.958 [2024-07-13 03:14:19.430562] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:12.958 [2024-07-13 03:14:19.430733] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.218 [2024-07-13 03:14:19.608310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:13.478 [2024-07-13 03:14:19.861466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:13.478 [2024-07-13 03:14:19.861561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.478 [2024-07-13 03:14:19.861589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.478 [2024-07-13 03:14:19.861607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.478 [2024-07-13 03:14:19.861621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.478 [2024-07-13 03:14:19.862592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.478 [2024-07-13 03:14:19.862637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.737 [2024-07-13 03:14:20.078590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:13.996 03:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.996 03:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:25:13.996 03:14:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:13.996 03:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:13.996 03:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:13.996 03:14:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.996 03:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=86875 00:25:13.996 03:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:14.255 [2024-07-13 03:14:20.645509] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.255 03:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:14.514 Malloc0 00:25:14.514 03:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:15.082 03:14:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:15.082 03:14:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.339 [2024-07-13 03:14:21.741748] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.339 03:14:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:15.598 [2024-07-13 03:14:22.010091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:15.598 03:14:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=86935 00:25:15.598 03:14:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:15.598 03:14:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 
-- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:15.598 03:14:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 86935 /var/tmp/bdevperf.sock 00:25:15.598 03:14:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 86935 ']' 00:25:15.598 03:14:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:15.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:15.598 03:14:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:15.598 03:14:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:15.598 03:14:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.598 03:14:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:16.970 03:14:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.970 03:14:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:25:16.970 03:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:16.970 03:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:17.227 Nvme0n1 00:25:17.227 03:14:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:17.792 Nvme0n1 00:25:17.792 03:14:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:25:17.792 03:14:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:18.727 03:14:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:25:18.727 03:14:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:18.984 03:14:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:19.241 03:14:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:25:19.241 03:14:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86875 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:19.241 03:14:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=86976 00:25:19.241 03:14:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:25.797 Attaching 4 probes... 00:25:25.797 @path[10.0.0.2, 4421]: 12764 00:25:25.797 @path[10.0.0.2, 4421]: 13035 00:25:25.797 @path[10.0.0.2, 4421]: 13107 00:25:25.797 @path[10.0.0.2, 4421]: 13112 00:25:25.797 @path[10.0.0.2, 4421]: 13083 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 86976 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:25.797 03:14:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:25.797 03:14:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:26.055 03:14:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:26.055 03:14:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87093 00:25:26.055 03:14:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:26.055 03:14:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86875 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:32.617 Attaching 4 probes... 
00:25:32.617 @path[10.0.0.2, 4420]: 12592 00:25:32.617 @path[10.0.0.2, 4420]: 12719 00:25:32.617 @path[10.0.0.2, 4420]: 12674 00:25:32.617 @path[10.0.0.2, 4420]: 12901 00:25:32.617 @path[10.0.0.2, 4420]: 12897 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87093 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:32.617 03:14:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:32.876 03:14:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:32.876 03:14:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87207 00:25:32.876 03:14:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86875 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:32.876 03:14:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:39.437 Attaching 4 probes... 
00:25:39.437 @path[10.0.0.2, 4421]: 9826 00:25:39.437 @path[10.0.0.2, 4421]: 12792 00:25:39.437 @path[10.0.0.2, 4421]: 12815 00:25:39.437 @path[10.0.0.2, 4421]: 12817 00:25:39.437 @path[10.0.0.2, 4421]: 12916 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87207 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:39.437 03:14:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:39.696 03:14:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:39.696 03:14:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87314 00:25:39.696 03:14:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:39.696 03:14:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86875 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:46.249 Attaching 4 probes... 
00:25:46.249 00:25:46.249 00:25:46.249 00:25:46.249 00:25:46.249 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87314 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:46.249 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:46.508 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:25:46.508 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87427 00:25:46.508 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86875 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:46.508 03:14:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:53.109 03:14:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:53.109 03:14:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:53.109 03:14:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:53.109 03:14:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:53.109 Attaching 4 probes... 
00:25:53.109 @path[10.0.0.2, 4421]: 14051 00:25:53.109 @path[10.0.0.2, 4421]: 14451 00:25:53.109 @path[10.0.0.2, 4421]: 15073 00:25:53.109 @path[10.0.0.2, 4421]: 15355 00:25:53.109 @path[10.0.0.2, 4421]: 15272 00:25:53.109 03:14:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:53.109 03:14:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:53.109 03:14:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:53.109 03:14:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:53.110 03:14:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:53.110 03:14:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:53.110 03:14:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87427 00:25:53.110 03:14:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:53.110 03:14:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:53.110 [2024-07-13 03:14:59.364624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:53.110 [2024-07-13 03:14:59.364693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:53.110 [2024-07-13 03:14:59.364711] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:25:53.110 03:14:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:25:54.045 03:15:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:25:54.045 03:15:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87545 00:25:54.045 03:15:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86875 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:54.045 03:15:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:00.606 Attaching 4 probes... 
00:26:00.606 @path[10.0.0.2, 4420]: 13862 00:26:00.606 @path[10.0.0.2, 4420]: 13876 00:26:00.606 @path[10.0.0.2, 4420]: 13826 00:26:00.606 @path[10.0.0.2, 4420]: 12824 00:26:00.606 @path[10.0.0.2, 4420]: 12407 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87545 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:00.606 03:15:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:00.606 [2024-07-13 03:15:06.990244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:00.606 03:15:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:00.864 03:15:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:26:07.424 03:15:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:26:07.424 03:15:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87721 00:26:07.424 03:15:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86875 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:07.424 03:15:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:14.034 Attaching 4 probes... 
00:26:14.034 @path[10.0.0.2, 4421]: 12373 00:26:14.034 @path[10.0.0.2, 4421]: 12316 00:26:14.034 @path[10.0.0.2, 4421]: 12644 00:26:14.034 @path[10.0.0.2, 4421]: 12532 00:26:14.034 @path[10.0.0.2, 4421]: 12548 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87721 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 86935 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 86935 ']' 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 86935 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86935 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86935' 00:26:14.034 killing process with pid 86935 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 86935 00:26:14.034 03:15:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 86935 00:26:14.034 Connection closed with partial response: 00:26:14.034 00:26:14.034 00:26:14.300 03:15:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 86935 00:26:14.300 03:15:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:14.300 [2024-07-13 03:14:22.115364] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:14.300 [2024-07-13 03:14:22.115531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86935 ] 00:26:14.300 [2024-07-13 03:14:22.301429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.300 [2024-07-13 03:14:22.529274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:14.300 [2024-07-13 03:14:22.723376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:14.300 Running I/O for 90 seconds... 
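The dump that follows is the bdevperf log preserved in test/nvmf/host/try.txt. The long runs of nvme_qpair completion lines carry the status "ASYMMETRIC ACCESS INACCESSIBLE (03/02)": these are I/Os that hit a listener right after its ANA state was flipped to inaccessible, which the bdev_nvme multipath layer then retries on the other path, so their presence here is expected rather than a failure. When skimming a dump like this, a rough count per status string is usually all that is needed; the one-liner below is only an illustration using standard tools against the file path printed above.

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # lines reporting the ANA-inaccessible completion status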
00:26:14.300 [2024-07-13 03:14:32.356415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.356501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.356601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.356630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.356693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.356715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.356744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.356764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.356792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.356829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.356859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.356880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.356909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.356945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.357041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.357101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.357152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.357224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.357278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.357328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.357379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.357435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.300 [2024-07-13 03:14:32.357500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.357567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.357619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.357701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.357751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.357802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.357853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.357912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.357973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.358000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.358031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.358053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.358082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.358104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.358133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.358154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.358184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.358205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.358235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.358256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.358286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.358307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.358337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.358358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.358388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.300 [2024-07-13 03:14:32.358410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:14.300 [2024-07-13 03:14:32.358439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.358461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.358491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.358513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.358543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.358572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.358604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.358632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.358662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.358682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.358711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.358732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.358762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.358782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.358812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:48 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.358848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.360913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.360966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.361046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.361099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.361151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.361202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.361273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.361327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.361394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.361446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361476] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.361497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.361548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.361598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.361650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.361700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.361751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.361781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.361803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.363295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.363341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.363385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.363408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.363439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.363461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 
sqhd:005a p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.363504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.363528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.363557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.363579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.363608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.363629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.363659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.363681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.363711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.363732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.364363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.364410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.364461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.364489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.364520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.364542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.364572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.364593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.364622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.364644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.364672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.364693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.364722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.364743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.364773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.364807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.367217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.367283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.367335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.367386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.367436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.367486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.367536] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.367587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.367637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.367687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.367738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.367800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.367854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.367925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.367956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.367977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.368006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.368027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.368057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.368077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.368106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.368127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.368156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.368177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.368206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.368227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.368256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.368277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.368306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.368327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.368355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.368384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.368413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.368435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.371116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.371206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.371265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.371329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.371376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.371423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.371470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.371534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.371598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.371648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.371698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.301 [2024-07-13 03:14:32.371747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.371816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.371883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.371928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.372012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:14.301 [2024-07-13 03:14:32.372045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.301 [2024-07-13 03:14:32.372066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.302 [2024-07-13 03:14:32.372117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.302 [2024-07-13 03:14:32.372168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.302 [2024-07-13 03:14:32.372218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.302 [2024-07-13 03:14:32.372272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:32.372323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:32.372374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
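The notices above show WRITE and READ commands on qid:1 being completed with status ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path-related) with status code 0x02, which is the expected outcome while the test reports the namespace's path as ANA inaccessible. Below is a minimal, illustrative parsing sketch (not part of the test suite) for tallying such completions per queue from a console log in this format; it assumes the spdk_nvme_print_completion line layout shown above, and the file name console.log is a placeholder.

#!/usr/bin/env python3
# Tally NVMe completions reported as ASYMMETRIC ACCESS INACCESSIBLE (03/02)
# in an SPDK autotest console log. Illustrative sketch only: it assumes the
# nvme_qpair.c print format seen above and a placeholder log file name.
import re
import sys
from collections import Counter

# Matches e.g. "... ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 ..."
COMPLETION = re.compile(r"ASYMMETRIC ACCESS INACCESSIBLE \(03/02\) qid:(\d+) cid:(\d+)")

def tally(path):
    per_qid = Counter()
    with open(path, errors="replace") as log:
        for line in log:
            # findall returns (qid, cid) tuples; count one completion per match
            for qid, _cid in COMPLETION.findall(line):
                per_qid[int(qid)] += 1
    return per_qid

if __name__ == "__main__":
    counts = tally(sys.argv[1] if len(sys.argv) > 1 else "console.log")
    for qid, n in sorted(counts.items()):
        print(f"qid {qid}: {n} completions with ANA inaccessible status")

Run against a saved copy of this console output, the script simply confirms that every failed completion in the burst carries the path-related ANA inaccessible status rather than a media or generic error.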
00:26:14.302 [2024-07-13 03:14:32.372404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:32.372425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:32.372475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:32.372535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:32.372587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:32.372637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:32.372688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:32.372738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:32.372789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:32.372840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:32.372870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:32.372891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.897264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.897379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.897434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.897493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.897568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.897637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.897685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.897733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.897782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.897846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.897896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.897967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.897996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.898017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.898066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.898129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.898178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.898234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.898312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.898362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:14.302 [2024-07-13 03:14:38.898413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.898463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.898513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.898563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.898613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.302 [2024-07-13 03:14:38.898664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.302 [2024-07-13 03:14:38.898728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.302 [2024-07-13 03:14:38.898777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.302 [2024-07-13 03:14:38.898842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.302 [2024-07-13 03:14:38.898906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.898961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:70 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.302 [2024-07-13 03:14:38.898996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.899029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.302 [2024-07-13 03:14:38.899050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.899080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.302 [2024-07-13 03:14:38.899101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.899152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.899178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.899209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.899230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.899274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.899295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.899325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.899346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.899374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.899411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:14.302 [2024-07-13 03:14:38.899440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.302 [2024-07-13 03:14:38.899461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.899490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.899511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.899540] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.899561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.899590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.899611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.899640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.899670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.899702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.899723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.899754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.899775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.899805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.899826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.899856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.899895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.899941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.899966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.899996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.900017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 
sqhd:0066 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900582] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.900840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.900957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.900999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 
03:14:38.901187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114072 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.901805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.901855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.901905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.901952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.901975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.902025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.902106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.902155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.902205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.303 [2024-07-13 03:14:38.902255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 
m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:14.303 [2024-07-13 03:14:38.902972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.303 [2024-07-13 03:14:38.902994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.903052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.903106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.903162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.903213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.903263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.903314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:38.903369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:38.903434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:38.903486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:38.903537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:38.903586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:38.903636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.903680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:38.903726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.904584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:38.904622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.904703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.904726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.904766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.904788] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.904826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.904849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.904888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.904910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.904963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.904999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.905041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.905064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.905103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.905125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.905191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.905217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.905258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.905279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.905318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.905340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.905379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.905411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:38.905453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114336 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:38.905476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.022128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.022230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.022316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.022345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.022379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.022401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.022431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.022452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.022482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.022503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.022532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.022553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.022597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.022617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.022661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.022682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.022712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.022733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.022762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:126 nsid:1 lba:34616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.022797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.022825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.022861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.022947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.022969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.023040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.023090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.023140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.023190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023376] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:34176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.023951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.023992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.024016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.024068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.024119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.024181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.024233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.024284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.024343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.024396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.024448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.024511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.304 [2024-07-13 03:14:46.024592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.024658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.024720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.024788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.024838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.024888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.024917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.024957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.025013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.025039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.025068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.025101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.025132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.025154] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.025183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.025204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.025233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.025261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.025290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.025311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.025340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.025362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.025391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.304 [2024-07-13 03:14:46.025412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:14.304 [2024-07-13 03:14:46.025442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.025463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.025492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.025514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.025544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.025565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.025595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.025616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.025661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.025681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.025709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.025746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.025786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.025809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.025838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.025859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.025888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.025923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.025973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.025995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.026046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.026095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.026144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:34760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.026210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.026261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.026311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.026361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.026413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.026473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.026524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.026619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.026684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.026734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.026785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026813] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.026834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.026896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.026957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.026999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 
m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.027971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.027993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.028022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.028043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.028073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.028095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.028124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.028146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.028175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.028197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.028226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.028248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.028277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.028298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.028327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.028349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.028378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.028399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.028428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.028448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.028477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.028498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.028536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.028587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.028618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.028655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.029653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.305 [2024-07-13 03:14:46.029692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.029742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.029766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.029806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.029828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.029867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.029903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.029946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.029969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.030008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.030030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.030069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
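
The run above is the host's view of a path going away: every in-flight READ/WRITE on qid:1 comes back with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the NVMe path-related status (SCT 0x3, SC 0x02) reported while the namespace's ANA group is inaccessible. The snippet below is a hypothetical helper, not anything from the SPDK tree (the names STATUS_NAMES and decode_status are illustrative); it simply decodes the "(SCT/SC)" pair that spdk_nvme_print_completion prints, for the two codes that occur in this log.

# Hypothetical helper, not part of the SPDK test suite: decode the "(SCT/SC)"
# pair that spdk_nvme_print_completion prints, e.g. "(03/02)" or "(00/08)".
# Only the two codes seen in this log are mapped; anything else falls through.
STATUS_NAMES = {
    (0x03, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE",  # SCT 0x3 Path Related, SC 0x02
    (0x00, 0x08): "ABORTED - SQ DELETION",           # SCT 0x0 Generic, SC 0x08
}

def decode_status(pair: str) -> str:
    """Map a hex 'SCT/SC' pair such as '03/02' to a readable status name."""
    sct, sc = (int(field, 16) for field in pair.split("/"))
    return STATUS_NAMES.get((sct, sc), f"unknown status (sct={sct:#x}, sc={sc:#x})")

print(decode_status("03/02"))  # -> ASYMMETRIC ACCESS INACCESSIBLE
print(decode_status("00/08"))  # -> ABORTED - SQ DELETION
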
00:26:14.305 [2024-07-13 03:14:46.030091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.030130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.030152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.030211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.030237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.030278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.030301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.030353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.030376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.030415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.030438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.030477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.030499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.030538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.030560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.030599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.030621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.030666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.030688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:14.305 [2024-07-13 03:14:46.030726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.305 [2024-07-13 03:14:46.030748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.364508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.364588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.364699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.364726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.364757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.364777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.364804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.364824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.364851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.364870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.364897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.364972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.365057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.365107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 
dnr:0 00:26:14.306 [2024-07-13 03:14:59.365725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.365906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.365925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.366045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.366086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.366140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.366177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 
03:14:59.366635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.366797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.366832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.366888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.366924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.366976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.366994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.367471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.367506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.367541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.367575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.367610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.367645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.367680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.306 [2024-07-13 03:14:59.367715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15856 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.367967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.367986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.368003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.368021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.368038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.368057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.368074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.368093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.368110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.368128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.306 [2024-07-13 03:14:59.368146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.368165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
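
From 03:14:59 onward the status changes: the remaining commands are completed as ABORTED - SQ DELETION (00/08), the generic status (SCT 0x0, SC 0x08) used when commands are flushed because their submission queue is being deleted, which is consistent with the target tearing the I/O queue down during the failover. Below is a hypothetical post-processing sketch, not part of the build (tally_statuses and the console.log path are illustrative), that counts how many completions landed in each status bucket when run over a captured console log.

# Hypothetical post-processing sketch, not part of the build: tally how many
# completions in a captured console log ended with each "(SCT/SC)" status,
# based on the spdk_nvme_print_completion lines shown above.
import re
from collections import Counter

COMPLETION_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<name>.+?)"
    r" \((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\)"
)

def tally_statuses(log_text: str) -> Counter:
    """Count completions per status, e.g. 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)'."""
    counts = Counter()
    for match in COMPLETION_RE.finditer(log_text):
        counts[f"{match.group('name')} ({match.group('sct')}/{match.group('sc')})"] += 1
    return counts

# Usage (illustrative path): tally_statuses(open("console.log").read()) reports how
# many I/Os were failed with ANA INACCESSIBLE versus aborted on SQ deletion.
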
00:26:14.306 [2024-07-13 03:14:59.368183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.306 [2024-07-13 03:14:59.368201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.368226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.368264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.368315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.368349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.307 [2024-07-13 03:14:59.368384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.307 [2024-07-13 03:14:59.368420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.307 [2024-07-13 03:14:59.368454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.307 [2024-07-13 03:14:59.368490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.307 [2024-07-13 03:14:59.368525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.307 [2024-07-13 03:14:59.368560] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.307 [2024-07-13 03:14:59.368595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.307 [2024-07-13 03:14:59.368630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.368665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.368707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.368742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.368778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.368813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.368848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.368884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.368950] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.368972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.369018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.369076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.369115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.369154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.369193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.369239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.369281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set 00:26:14.307 [2024-07-13 03:14:59.369330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.369347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.369363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.369397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.369453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:26:14.307 [2024-07-13 03:14:59.369468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16424 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.369493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.369542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.369556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16432 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.369575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.369606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.369621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16440 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.369639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.369670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.369684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.369702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.369733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.369747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16456 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.369765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.369796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.369819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16464 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.369838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.369869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.369884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16472 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.369931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.369951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.369965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.369979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.369997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.370027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.370041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16488 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.370059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.370089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.370103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16496 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.370121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.370151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.370165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16504 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.370197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.370226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.370240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.370257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.370286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.370300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:16520 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.370331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.370365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.370380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16528 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.370396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.370425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.370438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16536 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.370455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.370483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.370496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.370512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.370540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.370554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16552 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.370570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.370598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.370611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16560 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.370628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.370656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.370669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16568 len:8 PRP1 0x0 PRP2 0x0 
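
The tail of the abort sequence is the host finishing the work itself: nvme_qpair_abort_queued_reqs logs "aborting queued i/o" and each still-queued request is then "Command completed manually" with the same ABORTED - SQ DELETION status before the qpair is cleaned up. A small hypothetical sketch (parse_completion and Completion are illustrative names, not SPDK code) for pulling the per-completion fields (qid, cid, sqhd, dnr) out of lines like these:

# Hypothetical sketch, not SPDK code: pull the completion fields carried by every
# spdk_nvme_print_completion line above (qid, cid, cdw0, sqhd, p, m, dnr) into a
# small record so the abort sequence can be inspected field by field.
import re
from dataclasses import dataclass
from typing import Optional

CPL_RE = re.compile(
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+) cdw0:(?P<cdw0>[0-9a-fA-Fx]+)"
    r" sqhd:(?P<sqhd>[0-9a-fA-F]+) p:(?P<p>\d) m:(?P<m>\d) dnr:(?P<dnr>\d)"
)

@dataclass
class Completion:
    qid: int    # queue the completion came back on
    cid: int    # command identifier being completed
    sqhd: int   # submission queue head pointer reported by the controller
    dnr: bool   # "do not retry" bit from the status field

def parse_completion(line: str) -> Optional[Completion]:
    """Return the parsed fields for one completion line, or None if it is not one."""
    match = CPL_RE.search(line)
    if match is None:
        return None
    return Completion(
        qid=int(match.group("qid")),
        cid=int(match.group("cid")),
        sqhd=int(match.group("sqhd"), 16),
        dnr=match.group("dnr") == "1",
    )
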
00:26:14.307 [2024-07-13 03:14:59.370685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.307 [2024-07-13 03:14:59.370713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.307 [2024-07-13 03:14:59.370727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:8 PRP1 0x0 PRP2 0x0 00:26:14.307 [2024-07-13 03:14:59.370746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.370989] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 00:26:14.307 [2024-07-13 03:14:59.371147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.307 [2024-07-13 03:14:59.371179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.371213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.307 [2024-07-13 03:14:59.371231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.371249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.307 [2024-07-13 03:14:59.371266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.371282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.307 [2024-07-13 03:14:59.371299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.371317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.307 [2024-07-13 03:14:59.371334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.307 [2024-07-13 03:14:59.371360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:26:14.307 [2024-07-13 03:14:59.372604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:14.307 [2024-07-13 03:14:59.372681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:26:14.307 [2024-07-13 03:14:59.373261] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.307 [2024-07-13 03:14:59.373304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4421 00:26:14.307 [2024-07-13 03:14:59.373328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:26:14.307 [2024-07-13 03:14:59.373391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:26:14.307 [2024-07-13 03:14:59.373449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:14.307 [2024-07-13 03:14:59.373479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:14.307 [2024-07-13 03:14:59.373498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:14.307 [2024-07-13 03:14:59.373544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:14.307 [2024-07-13 03:14:59.373568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:14.307 [2024-07-13 03:15:09.458837] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:14.307 Received shutdown signal, test time was about 55.464610 seconds 00:26:14.307 00:26:14.307 Latency(us) 00:26:14.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.307 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:14.307 Verification LBA range: start 0x0 length 0x4000 00:26:14.308 Nvme0n1 : 55.46 5628.61 21.99 0.00 0.00 22709.96 1414.98 7046430.72 00:26:14.308 =================================================================================================================== 00:26:14.308 Total : 5628.61 21.99 0.00 0.00 22709.96 1414.98 7046430.72 00:26:14.308 03:15:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:14.565 03:15:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:26:14.565 03:15:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:14.565 03:15:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:26:14.565 03:15:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:14.565 03:15:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:26:14.565 03:15:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:14.565 03:15:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:26:14.565 03:15:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:14.565 03:15:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:14.565 rmmod nvme_tcp 00:26:14.565 rmmod nvme_fabrics 00:26:14.823 rmmod nvme_keyring 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 86875 ']' 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 86875 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 86875 ']' 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@952 -- # kill -0 86875 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86875 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:14.823 killing process with pid 86875 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86875' 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 86875 00:26:14.823 03:15:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 86875 00:26:16.199 03:15:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:16.199 03:15:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:16.199 03:15:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:16.199 03:15:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:16.199 03:15:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:16.199 03:15:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.199 03:15:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.199 03:15:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.199 03:15:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:16.199 00:26:16.199 real 1m3.831s 00:26:16.199 user 2m56.733s 00:26:16.199 sys 0m17.208s 00:26:16.199 03:15:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:16.199 ************************************ 00:26:16.199 END TEST nvmf_host_multipath 00:26:16.199 03:15:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:16.199 ************************************ 00:26:16.457 03:15:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:16.457 03:15:22 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:16.458 03:15:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:16.458 03:15:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:16.458 03:15:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:16.458 ************************************ 00:26:16.458 START TEST nvmf_timeout 00:26:16.458 ************************************ 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:16.458 * Looking for test storage... 
00:26:16.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.458 
03:15:22 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.458 03:15:22 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:16.458 Cannot find device "nvmf_tgt_br" 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:16.458 Cannot find device "nvmf_tgt_br2" 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:16.458 Cannot find device "nvmf_tgt_br" 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:16.458 Cannot find device "nvmf_tgt_br2" 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:26:16.458 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:16.459 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:16.459 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:16.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:16.716 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:26:16.716 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:16.716 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:16.716 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:26:16.716 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:16.716 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:16.716 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:16.716 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:16.716 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:16.716 03:15:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:16.716 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:16.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:16.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:26:16.717 00:26:16.717 --- 10.0.0.2 ping statistics --- 00:26:16.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.717 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:16.717 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:16.717 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:26:16.717 00:26:16.717 --- 10.0.0.3 ping statistics --- 00:26:16.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.717 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:16.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:16.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:26:16.717 00:26:16.717 --- 10.0.0.1 ping statistics --- 00:26:16.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.717 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=88051 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 88051 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 88051 ']' 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:16.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:16.717 03:15:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:16.974 [2024-07-13 03:15:23.307468] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
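For reference, the network plumbing that nvmf_veth_init performs in the trace above boils down to roughly the following sequence. This is a minimal sketch reconstructed from the ip/iptables commands visible in the log (same namespace, interface names and 10.0.0.x addresses); it is not the helper function itself, and it assumes root plus iproute2/iptables.

# Sketch of the veth/namespace topology built by nvmf_veth_init (reconstructed from the trace).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # reachability check, as in the log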
00:26:16.974 [2024-07-13 03:15:23.307632] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.232 [2024-07-13 03:15:23.488478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:17.490 [2024-07-13 03:15:23.770854] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.490 [2024-07-13 03:15:23.770938] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.490 [2024-07-13 03:15:23.770959] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.490 [2024-07-13 03:15:23.770977] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.490 [2024-07-13 03:15:23.770991] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:17.490 [2024-07-13 03:15:23.771167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.490 [2024-07-13 03:15:23.771376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.748 [2024-07-13 03:15:23.999556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:18.006 03:15:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:18.006 03:15:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:18.006 03:15:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.006 03:15:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:18.006 03:15:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.006 03:15:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.006 03:15:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:18.006 03:15:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:18.264 [2024-07-13 03:15:24.536553] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.264 03:15:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:18.522 Malloc0 00:26:18.522 03:15:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:18.782 03:15:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:19.041 03:15:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:19.299 [2024-07-13 03:15:25.734395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.299 03:15:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=88100 00:26:19.299 03:15:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 88100 /var/tmp/bdevperf.sock 
00:26:19.299 03:15:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 88100 ']' 00:26:19.299 03:15:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:19.299 03:15:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:19.299 03:15:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:19.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:19.299 03:15:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:19.299 03:15:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:19.299 03:15:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.556 [2024-07-13 03:15:25.852536] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:19.556 [2024-07-13 03:15:25.852677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88100 ] 00:26:19.556 [2024-07-13 03:15:26.020789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.814 [2024-07-13 03:15:26.226320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.073 [2024-07-13 03:15:26.424414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:20.331 03:15:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:20.331 03:15:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:20.331 03:15:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:20.591 03:15:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:21.159 NVMe0n1 00:26:21.159 03:15:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=88124 00:26:21.159 03:15:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:26:21.159 03:15:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:21.159 Running I/O for 10 seconds... 
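Stripped of the shell-trace prefixes, the bring-up that timeout.sh has just performed is roughly the following. Paths, NQNs and parameters are taken verbatim from the trace above; the backgrounding and synchronization are simplified here (the real script goes through waitforlisten and other helper wrappers), so treat this as a condensed sketch rather than the script itself.

SPDK=/home/vagrant/spdk_repo/spdk
RPC=$SPDK/scripts/rpc.py

# Target side, launched inside the nvmf_tgt_ns_spdk namespace (RPCs go to the default spdk.sock):
ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf on its own RPC socket; the controller is attached with a
# 5 s ctrlr-loss timeout and 2 s reconnect delay, the knobs this timeout test exercises.
$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -f &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &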
00:26:22.095 03:15:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.358 [2024-07-13 03:15:28.650653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.358 [2024-07-13 03:15:28.650733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.650759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.358 [2024-07-13 03:15:28.650777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.650810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.358 [2024-07-13 03:15:28.650825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.650866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.358 [2024-07-13 03:15:28.650897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.650914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:22.358 [2024-07-13 03:15:28.651031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 
03:15:28.651276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.358 [2024-07-13 03:15:28.651628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-07-13 03:15:28.651664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-07-13 03:15:28.651699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-07-13 03:15:28.651734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-07-13 03:15:28.651768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-07-13 03:15:28.651802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-07-13 03:15:28.651836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-07-13 03:15:28.651870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-07-13 03:15:28.651905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.358 [2024-07-13 03:15:28.651922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.358 [2024-07-13 03:15:28.651941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.651971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.651992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:88 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55272 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.652599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.359 [2024-07-13 03:15:28.652638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.359 [2024-07-13 03:15:28.652674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.359 [2024-07-13 03:15:28.652709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.359 [2024-07-13 03:15:28.652745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.359 [2024-07-13 03:15:28.652785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.359 [2024-07-13 03:15:28.652821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.359 [2024-07-13 03:15:28.652906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.652936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.359 
[2024-07-13 03:15:28.652953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.653020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.653056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.653083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.653102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.653121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.653140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.653158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.653185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.653203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.653223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.359 [2024-07-13 03:15:28.653241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.359 [2024-07-13 03:15:28.653258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.653293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.653329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.653366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.653400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.653435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.653470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.653504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.653539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.653574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.653610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.360 [2024-07-13 03:15:28.653647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.360 [2024-07-13 03:15:28.653687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.360 [2024-07-13 03:15:28.653722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.360 [2024-07-13 03:15:28.653757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.360 [2024-07-13 03:15:28.653792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.360 [2024-07-13 03:15:28.653827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.360 [2024-07-13 03:15:28.653863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.360 [2024-07-13 03:15:28.653922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.653961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.653979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.653996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.654013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.654031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.654048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.654066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.654084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.654101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.654119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.654136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.654153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.654172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.654189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.654208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.654226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.654245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.654263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.654280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.654298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.654315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.654332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.654350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.654367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.654384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.360 [2024-07-13 03:15:28.654401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.360 [2024-07-13 03:15:28.654419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.361 [2024-07-13 03:15:28.654453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.361 [2024-07-13 03:15:28.654488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 
03:15:28.654506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.361 [2024-07-13 03:15:28.654525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.361 [2024-07-13 03:15:28.654559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.361 [2024-07-13 03:15:28.654594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.361 [2024-07-13 03:15:28.654675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.654718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.654753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.654790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.654826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.654870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.654923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.654958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.654975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.654992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.655027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.655062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.655097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.655133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.655169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.655206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.655241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.361 [2024-07-13 03:15:28.655275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:56 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.361 [2024-07-13 03:15:28.655310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.361 [2024-07-13 03:15:28.655344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.361 [2024-07-13 03:15:28.655394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.361 [2024-07-13 03:15:28.655430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.361 [2024-07-13 03:15:28.655471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.361 [2024-07-13 03:15:28.655507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.361 [2024-07-13 03:15:28.655524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.362 [2024-07-13 03:15:28.655541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.362 [2024-07-13 03:15:28.655575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.362 [2024-07-13 03:15:28.655610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.362 [2024-07-13 03:15:28.655645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55664 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:22.362 [2024-07-13 03:15:28.655681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.362 [2024-07-13 03:15:28.655716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.362 [2024-07-13 03:15:28.655752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.362 [2024-07-13 03:15:28.655788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.362 [2024-07-13 03:15:28.655822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.362 [2024-07-13 03:15:28.655856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.362 [2024-07-13 03:15:28.655905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.362 [2024-07-13 03:15:28.655942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.362 [2024-07-13 03:15:28.655978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.655995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.362 [2024-07-13 03:15:28.656013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.656030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.362 
[2024-07-13 03:15:28.656052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.656108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:22.362 [2024-07-13 03:15:28.656129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:22.362 [2024-07-13 03:15:28.656145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55096 len:8 PRP1 0x0 PRP2 0x0 00:26:22.362 [2024-07-13 03:15:28.656163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.362 [2024-07-13 03:15:28.656420] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:26:22.362 [2024-07-13 03:15:28.656721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.362 [2024-07-13 03:15:28.656763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:22.362 [2024-07-13 03:15:28.656915] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.362 [2024-07-13 03:15:28.656954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:22.362 [2024-07-13 03:15:28.656973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:22.362 [2024-07-13 03:15:28.657025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:22.362 [2024-07-13 03:15:28.657067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.362 [2024-07-13 03:15:28.657088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.362 [2024-07-13 03:15:28.657105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.362 [2024-07-13 03:15:28.657147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
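The burst of ABORTED - SQ DELETION completions above, together with connect() failing with errno = 111 (ECONNREFUSED on Linux), is the expected behaviour for this phase of the timeout test: reconnects to 10.0.0.2 port 4420 are being refused, consistent with the listener having been removed on the target side, so queued I/O is aborted when the submission queue is deleted and bdev_nvme keeps retrying the controller reset. While that retry loop runs, the trace that follows sleeps and then verifies over bdevperf's RPC socket that the controller and bdev are still registered. A minimal standalone sketch of that check, reusing the rpc.py path, socket and names exactly as they appear in this log (outside this CI environment they are assumptions):

    #!/usr/bin/env bash
    # Sketch of the get_controller/get_bdev check traced below (host/timeout.sh@57-58).
    # Paths and names are taken from this log; adjust for other setups.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')

    # The trace below expects both to still be present at this point.
    [[ $ctrlr == NVMe0 ]]  || echo "unexpected controller list: '$ctrlr'"
    [[ $bdev == NVMe0n1 ]] || echo "unexpected bdev list: '$bdev'"

Later in the trace (host/timeout.sh@62-63) the same two queries are expected to return empty strings, which is consistent with the controller and its bdev having been deleted once the loss timeout expires.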
00:26:22.362 [2024-07-13 03:15:28.657164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.362 03:15:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:26:24.327 [2024-07-13 03:15:30.657410] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.327 [2024-07-13 03:15:30.657496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:24.327 [2024-07-13 03:15:30.657521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:24.327 [2024-07-13 03:15:30.657562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:24.327 [2024-07-13 03:15:30.657593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.327 [2024-07-13 03:15:30.657615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.327 [2024-07-13 03:15:30.657632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.327 [2024-07-13 03:15:30.657675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.327 [2024-07-13 03:15:30.657694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.327 03:15:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:26:24.327 03:15:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:24.327 03:15:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:24.585 03:15:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:26:24.585 03:15:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:26:24.585 03:15:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:24.585 03:15:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:24.842 03:15:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:26:24.842 03:15:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:26:26.216 [2024-07-13 03:15:32.657948] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.216 [2024-07-13 03:15:32.658053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:26.216 [2024-07-13 03:15:32.658079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:26.216 [2024-07-13 03:15:32.658125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:26.216 [2024-07-13 03:15:32.658157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.216 [2024-07-13 03:15:32.658177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.216 [2024-07-13 03:15:32.658193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.216 [2024-07-13 03:15:32.658236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.216 [2024-07-13 03:15:32.658269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.744 [2024-07-13 03:15:34.658407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:28.744 [2024-07-13 03:15:34.658501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.744 [2024-07-13 03:15:34.658523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.744 [2024-07-13 03:15:34.658539] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:28.744 [2024-07-13 03:15:34.658586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.310 00:26:29.310 Latency(us) 00:26:29.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.310 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:29.310 Verification LBA range: start 0x0 length 0x4000 00:26:29.310 NVMe0n1 : 8.15 838.05 3.27 15.70 0.00 149694.49 4855.62 7046430.72 00:26:29.310 =================================================================================================================== 00:26:29.310 Total : 838.05 3.27 15.70 0.00 149694.49 4855.62 7046430.72 00:26:29.310 0 00:26:29.875 03:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:26:29.875 03:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:29.875 03:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:30.133 03:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:30.133 03:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:26:30.133 03:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:30.133 03:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 88124 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 88100 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 88100 ']' 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 88100 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88100 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:30.391 killing process with pid 88100 00:26:30.391 Received shutdown signal, test time was about 9.295268 seconds 00:26:30.391 00:26:30.391 Latency(us) 00:26:30.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:26:30.391 =================================================================================================================== 00:26:30.391 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88100' 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 88100 00:26:30.391 03:15:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 88100 00:26:31.767 03:15:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:31.767 [2024-07-13 03:15:38.256322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.027 03:15:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=88258 00:26:32.027 03:15:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 88258 /var/tmp/bdevperf.sock 00:26:32.027 03:15:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 88258 ']' 00:26:32.027 03:15:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:32.027 03:15:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:32.027 03:15:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:32.027 03:15:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:32.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:32.027 03:15:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:32.027 03:15:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.027 [2024-07-13 03:15:38.369519] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
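Between the shutdown of the first bdevperf (pid 88100) above and the DPDK initialization lines that follow, the trace sets up the next test case: the TCP listener is re-added on 10.0.0.2:4420, a new bdevperf instance is started idle (-z) on its own RPC socket, and the NVMe-oF controller is then attached with explicit reconnect/loss/fast-io-fail timers before I/O is started. A condensed sketch of that sequence, using only commands and paths that appear in this trace (the real script also waits for the RPC socket to come up before issuing RPCs, which is elided here):

    #!/usr/bin/env bash
    # Condensed from the host/timeout.sh@71..@84 trace around this point;
    # paths and the 10.0.0.2 address are as logged here and are assumptions elsewhere.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    # Re-add the target's TCP listener for the subsystem.
    "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

    # Start bdevperf idle (-z) so it can be configured over its RPC socket first.
    "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &

    # Same options as the trace: retry setting first, then attach with the
    # reconnect-delay / ctrlr-loss / fast-io-fail timers used by this test.
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Kick off the 10-second verify workload; as at host/timeout.sh@87 below,
    # the listener is then removed again to provoke the timeout path.
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &

Everything shown maps one-to-one onto the "--" trace lines surrounding this point in the log; it is a sketch of the traced sequence, not a substitute for the test script itself.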
00:26:32.027 [2024-07-13 03:15:38.369666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88258 ] 00:26:32.286 [2024-07-13 03:15:38.539024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.286 [2024-07-13 03:15:38.743876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.545 [2024-07-13 03:15:38.941408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:32.804 03:15:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:32.804 03:15:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:32.804 03:15:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:33.063 03:15:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:26:33.323 NVMe0n1 00:26:33.582 03:15:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=88277 00:26:33.582 03:15:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:33.582 03:15:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:26:33.582 Running I/O for 10 seconds... 00:26:34.516 03:15:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:34.777 [2024-07-13 03:15:41.058634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:26:34.777 [2024-07-13 03:15:41.059201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:26:34.777 [2024-07-13 03:15:41.059265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:26:34.777 [2024-07-13 03:15:41.059385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:56616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.059434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.059488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.059523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.059555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.059588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.059620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.059652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.059689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.059722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.059755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.059786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.059818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.059849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:57040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.059894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.059932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.059966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.059983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.060001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.060033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.060064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.777 [2024-07-13 03:15:41.060096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:56672 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:34.777 [2024-07-13 03:15:41.060551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.777 [2024-07-13 03:15:41.060599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:56768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.777 [2024-07-13 03:15:41.060614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.778 [2024-07-13 03:15:41.060630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.778 [2024-07-13 03:15:41.060646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.778 [2024-07-13 03:15:41.060662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.778 [2024-07-13 03:15:41.060678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.778 [2024-07-13 03:15:41.060694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.778 [2024-07-13 03:15:41.060709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.778 [2024-07-13 03:15:41.060725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.778 [2024-07-13 03:15:41.060743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.778 [2024-07-13 03:15:41.060759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.778 [2024-07-13 03:15:41.060775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.778 [2024-07-13 03:15:41.060791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.778 [2024-07-13 03:15:41.060806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.778 [2024-07-13 03:15:41.060824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.778 [2024-07-13 03:15:41.060853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.778 [2024-07-13 03:15:41.060877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.778 [2024-07-13 03:15:41.060911] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.778 [2024-07-13 03:15:41.060929 .. 03:15:41.063707] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (repeated output condensed) every outstanding command on sqid:1 -- READ lba 56776-56960 and WRITE lba 57160-57624, len:8 each -- is printed and its completion reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.780 [2024-07-13 03:15:41.063722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set
00:26:34.780 [2024-07-13 03:15:41.063746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:34.780 [2024-07-13 03:15:41.063761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:34.780 [2024-07-13 03:15:41.063777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57632 len:8 PRP1 0x0 PRP2 0x0
00:26:34.780 [2024-07-13 03:15:41.063792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.780 [2024-07-13 03:15:41.064064] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller.
00:26:34.780 [2024-07-13 03:15:41.064379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.780 [2024-07-13 03:15:41.064505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:26:34.780 [2024-07-13 03:15:41.064654] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.780 [2024-07-13 03:15:41.064696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:26:34.780 [2024-07-13 03:15:41.064718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:26:34.780 [2024-07-13 03:15:41.064747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:26:34.780 [2024-07-13 03:15:41.064776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.780 [2024-07-13 03:15:41.064792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.780 [2024-07-13 03:15:41.064809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.780 [2024-07-13 03:15:41.064840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
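At this point the TCP listener on 10.0.0.2:4420 is down, so each reconnect attempt from the initiator's io_uring socket layer fails with errno 111 (ECONNREFUSED on Linux) and bdev_nvme keeps reporting "Resetting controller failed." until the listener comes back. A minimal sketch of the listener bounce that drives this part of the test, using only the rpc.py calls that appear verbatim in this log (the one-second delay and exact sequencing are assumptions; host/timeout.sh controls the real timing):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Drop the TCP listener: outstanding I/O on the queue pair is completed as
    # ABORTED - SQ DELETION and the host enters its reset/reconnect loop.
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    sleep 1    # reconnect attempts fail here with connect() errno = 111
    # Restore the listener: the next reset attempt connects and bdev_nvme logs
    # "Resetting controller successful."
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420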
00:26:34.780 [2024-07-13 03:15:41.064865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
03:15:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:26:35.716 [2024-07-13 03:15:42.065091] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.716 [2024-07-13 03:15:42.065162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:26:35.716 [2024-07-13 03:15:42.065189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:26:35.716 [2024-07-13 03:15:42.065228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:26:35.716 [2024-07-13 03:15:42.065261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.716 [2024-07-13 03:15:42.065276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.716 [2024-07-13 03:15:42.065295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.716 [2024-07-13 03:15:42.065335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.716 [2024-07-13 03:15:42.065356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.716 03:15:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:35.974 [2024-07-13 03:15:42.335094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:35.974 03:15:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 88277
00:26:36.921 [2024-07-13 03:15:43.081798] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:43.497
00:26:43.497                                                                                 Latency(us)
00:26:43.497 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:26:43.497 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:43.497 Verification LBA range: start 0x0 length 0x4000
00:26:43.497 NVMe0n1                     :      10.01    4893.14      19.11      0.00     0.00   26108.00    1586.27 3035150.89
00:26:43.497 ===================================================================================================================
00:26:43.497 Total                       :    4893.14      19.11       0.00     0.00   26108.00    1586.27 3035150.89
00:26:43.497 0
00:26:43.497 03:15:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=88377
00:26:43.497 03:15:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:26:43.497 03:15:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:43.755 Running I/O for 10 seconds...
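The bdevperf summary above can be sanity-checked with a little arithmetic (added here for clarity; not part of the bdevperf output): at queue depth 128 with an average latency of 26108.00 us, Little's law gives roughly 128 / 0.026108 s ≈ 4903 requests per second, in line with the reported 4893.14 IOPS, and 4893.14 IOPS of 4096-byte I/O ≈ 19.11 MiB/s, matching the MiB/s column. The maximum latency of 3035150.89 us (about 3 seconds) is consistent with an I/O that waited out the controller reset exercised above. A throwaway check, assuming only the figures in the table:

    awk 'BEGIN {
        printf "IOPS implied by qd 128 / 26108 us avg latency: %.2f\n", 128 / 26108.00 * 1e6
        printf "MiB/s implied by 4893.14 IOPS of 4096 B I/O:   %.2f\n", 4893.14 * 4096 / 1048576
    }'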
00:26:44.693 03:15:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:44.954 [2024-07-13 03:15:51.196474 .. 03:15:51.200014] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (repeated output condensed) the I/O outstanding on sqid:1 when the listener is removed -- READ lba 71880-72256 and WRITE lba 72264-72760, len:8 each -- is printed and its completion reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:44.956 [2024-07-13 03:15:51.200026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.956 [2024-07-13 03:15:51.200454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.956 [2024-07-13 03:15:51.200500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.957 [2024-07-13 03:15:51.200514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.957 [2024-07-13 03:15:51.200526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72896 len:8 PRP1 0x0 PRP2 0x0 00:26:44.957 [2024-07-13 03:15:51.200539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.957 [2024-07-13 03:15:51.200776] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 
00:26:44.957 [2024-07-13 03:15:51.200926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.957 [2024-07-13 03:15:51.200959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.957 [2024-07-13 03:15:51.200976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.957 [2024-07-13 03:15:51.200990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.957 [2024-07-13 03:15:51.201037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.957 [2024-07-13 03:15:51.201059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.957 [2024-07-13 03:15:51.201076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:44.957 [2024-07-13 03:15:51.201089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.957 [2024-07-13 03:15:51.201101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:44.957 [2024-07-13 03:15:51.201358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.957 [2024-07-13 03:15:51.201413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:44.957 [2024-07-13 03:15:51.201562] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.957 [2024-07-13 03:15:51.201592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:44.957 [2024-07-13 03:15:51.201608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:44.957 [2024-07-13 03:15:51.201636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:44.957 [2024-07-13 03:15:51.201660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.957 [2024-07-13 03:15:51.201675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.957 [2024-07-13 03:15:51.201689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.957 [2024-07-13 03:15:51.201717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.957 [2024-07-13 03:15:51.201734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.957 03:15:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:26:45.892 [2024-07-13 03:15:52.201903] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.892 [2024-07-13 03:15:52.202010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:45.892 [2024-07-13 03:15:52.202032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:45.892 [2024-07-13 03:15:52.202083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:45.892 [2024-07-13 03:15:52.202109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.892 [2024-07-13 03:15:52.202122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.892 [2024-07-13 03:15:52.202136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.892 [2024-07-13 03:15:52.202189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.892 [2024-07-13 03:15:52.202206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.826 [2024-07-13 03:15:53.202403] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.826 [2024-07-13 03:15:53.202502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:46.826 [2024-07-13 03:15:53.202523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:46.826 [2024-07-13 03:15:53.202560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:46.826 [2024-07-13 03:15:53.202586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.826 [2024-07-13 03:15:53.202600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.826 [2024-07-13 03:15:53.202614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.826 [2024-07-13 03:15:53.202651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.826 [2024-07-13 03:15:53.202668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.760 [2024-07-13 03:15:54.206848] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.760 [2024-07-13 03:15:54.206947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:47.760 [2024-07-13 03:15:54.206971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:47.760 [2024-07-13 03:15:54.207251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:47.760 [2024-07-13 03:15:54.207521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.760 [2024-07-13 03:15:54.207550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.760 [2024-07-13 03:15:54.207568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.760 [2024-07-13 03:15:54.211836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.760 [2024-07-13 03:15:54.211875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.760 03:15:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.019 [2024-07-13 03:15:54.479642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.019 03:15:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 88377 00:26:48.951 [2024-07-13 03:15:55.263368] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
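The entries above show the host-side reconnect loop of the timeout test: with no listener on 10.0.0.2:4420, each connect() from uring_sock_create() fails with errno 111, the controller reset is retried about once per second, and the resets keep failing until host/timeout.sh re-adds the listener (the nvmf_subsystem_add_listener call at @102), after which the next reset completes successfully. For reference, a minimal sketch of that listener toggle, built only from the rpc.py invocations that appear in this trace; the sleep is illustrative and the NQN, address, and port are the ones shown above, not a prescribed procedure.

  # Sketch of the listener toggle exercised by host/timeout.sh (commands copied from the trace).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Drop the TCP listener: the initiator's reconnect attempts now fail with errno 111.
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  sleep 3   # illustrative pause while the controller resets keep failing
  # Restore the listener: the next reconnect poll connects and the controller reset succeeds.
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420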
00:26:54.242 00:26:54.242 Latency(us) 00:26:54.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.242 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:54.242 Verification LBA range: start 0x0 length 0x4000 00:26:54.242 NVMe0n1 : 10.01 4409.35 17.22 3381.04 0.00 16392.66 860.16 3019898.88 00:26:54.242 =================================================================================================================== 00:26:54.242 Total : 4409.35 17.22 3381.04 0.00 16392.66 0.00 3019898.88 00:26:54.242 0 00:26:54.242 03:16:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 88258 00:26:54.242 03:16:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 88258 ']' 00:26:54.242 03:16:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 88258 00:26:54.242 03:16:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:26:54.242 03:16:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:54.242 03:16:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88258 00:26:54.242 killing process with pid 88258 00:26:54.242 Received shutdown signal, test time was about 10.000000 seconds 00:26:54.242 00:26:54.242 Latency(us) 00:26:54.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.242 =================================================================================================================== 00:26:54.242 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.242 03:16:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:54.242 03:16:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:54.242 03:16:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88258' 00:26:54.242 03:16:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 88258 00:26:54.242 03:16:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 88258 00:26:54.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:54.856 03:16:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=88502 00:26:54.856 03:16:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:26:54.856 03:16:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 88502 /var/tmp/bdevperf.sock 00:26:54.856 03:16:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 88502 ']' 00:26:54.856 03:16:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:54.856 03:16:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.856 03:16:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:54.856 03:16:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.856 03:16:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:54.856 [2024-07-13 03:16:01.339537] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:26:54.856 [2024-07-13 03:16:01.340524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88502 ] 00:26:55.115 [2024-07-13 03:16:01.518803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.373 [2024-07-13 03:16:01.713727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.632 [2024-07-13 03:16:01.904011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:55.890 03:16:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.890 03:16:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:55.890 03:16:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=88517 00:26:55.890 03:16:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:26:55.890 03:16:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88502 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:26:56.148 03:16:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:56.407 NVMe0n1 00:26:56.407 03:16:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=88560 00:26:56.407 03:16:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:26:56.407 03:16:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:56.666 Running I/O for 10 seconds... 
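At this point the trace has brought up the second half of the test: bdevperf is started idle on its own RPC socket, bdev_nvme options and a bpftrace probe are applied, the NVMe-oF controller is attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, and bdevperf.py perform_tests starts the 10-second random-read run. Below is a condensed sketch of that sequence using only paths and arguments that appear in the trace above; the option comments are a best-effort reading of the flags, and pid handling is simplified with $! rather than the script's waitforlisten/dtrace_pid bookkeeping.

  # Launch bdevperf idle (-z) on a private RPC socket: 4 KiB random reads, QD 128, 10 s run.
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  # (host/timeout.sh waits for the RPC socket with waitforlisten before issuing the RPCs below.)

  # bdev_nvme options exactly as traced (host/timeout.sh@118), plus the bpftrace probe (@115).
  $rpc bdev_nvme_set_options -r -1 -e 9
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh $! /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &

  # Attach the target (@120); reconnect every 2 s, give the controller up after 5 s of loss.
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # Kick off the I/O from the same socket (@123).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests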
00:26:57.600 03:16:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.862 [2024-07-13 03:16:04.108610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.862 [2024-07-13 03:16:04.108700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.108722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.862 [2024-07-13 03:16:04.108739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.108753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.862 [2024-07-13 03:16:04.108786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.108815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.862 [2024-07-13 03:16:04.108830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.108843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:57.862 [2024-07-13 03:16:04.109227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 
03:16:04.109448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.109974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.109988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:25 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46024 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.862 [2024-07-13 03:16:04.110505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.862 [2024-07-13 03:16:04.110525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.110539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.110557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.110571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.110590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.110604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.110622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.110636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.110655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.110668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.110688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.110702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.110720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.110734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.110769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.110784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.110821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.110836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.110855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:57.863 [2024-07-13 03:16:04.110869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.110888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.110903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.110937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.110954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.110974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.110989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111228] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.111983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.111997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.863 [2024-07-13 03:16:04.112016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.863 [2024-07-13 03:16:04.112036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:33336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:57.864 [2024-07-13 03:16:04.112638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.112955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.112970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 
03:16:04.112989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.864 [2024-07-13 03:16:04.113543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.864 [2024-07-13 03:16:04.113561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.865 [2024-07-13 03:16:04.113575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.865 [2024-07-13 03:16:04.113594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.865 [2024-07-13 03:16:04.113609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.865 [2024-07-13 03:16:04.113627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.865 [2024-07-13 03:16:04.113642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.865 [2024-07-13 03:16:04.113662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.865 [2024-07-13 03:16:04.113687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.865 [2024-07-13 03:16:04.113708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.865 [2024-07-13 03:16:04.113723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.865 [2024-07-13 03:16:04.113742] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.865 [2024-07-13 03:16:04.113758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.865 [2024-07-13 03:16:04.113776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:26:57.865 [2024-07-13 03:16:04.113795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.865 [2024-07-13 03:16:04.113812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.865 [2024-07-13 03:16:04.113827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83024 len:8 PRP1 0x0 PRP2 0x0 00:26:57.865 [2024-07-13 03:16:04.113845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.865 [2024-07-13 03:16:04.114114] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:26:57.865 [2024-07-13 03:16:04.114453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:57.865 [2024-07-13 03:16:04.114499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:57.865 [2024-07-13 03:16:04.114640] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.865 [2024-07-13 03:16:04.114679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:57.865 [2024-07-13 03:16:04.114698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:57.865 [2024-07-13 03:16:04.114744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:57.865 [2024-07-13 03:16:04.114771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:57.865 [2024-07-13 03:16:04.114809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:57.865 [2024-07-13 03:16:04.114824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:57.865 [2024-07-13 03:16:04.114858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.865 [2024-07-13 03:16:04.114895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:57.865 03:16:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 88560 00:26:59.770 [2024-07-13 03:16:06.115143] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.770 [2024-07-13 03:16:06.115220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:59.770 [2024-07-13 03:16:06.115267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:59.770 [2024-07-13 03:16:06.115331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:59.770 [2024-07-13 03:16:06.115364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.770 [2024-07-13 03:16:06.115383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.770 [2024-07-13 03:16:06.115400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.770 [2024-07-13 03:16:06.115449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.770 [2024-07-13 03:16:06.115469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.673 [2024-07-13 03:16:08.115715] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.673 [2024-07-13 03:16:08.115804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:27:01.673 [2024-07-13 03:16:08.115843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:27:01.673 [2024-07-13 03:16:08.115881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:01.673 [2024-07-13 03:16:08.115910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.673 [2024-07-13 03:16:08.115943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.673 [2024-07-13 03:16:08.115958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.673 [2024-07-13 03:16:08.116033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.673 [2024-07-13 03:16:08.116051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.205 [2024-07-13 03:16:10.116177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:04.205 [2024-07-13 03:16:10.116237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.205 [2024-07-13 03:16:10.116256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.205 [2024-07-13 03:16:10.116272] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:04.205 [2024-07-13 03:16:10.116319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.772 00:27:04.772 Latency(us) 00:27:04.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.772 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:27:04.772 NVMe0n1 : 8.15 1558.26 6.09 15.71 0.00 81224.96 10724.07 7046430.72 00:27:04.772 =================================================================================================================== 00:27:04.772 Total : 1558.26 6.09 15.71 0.00 81224.96 10724.07 7046430.72 00:27:04.772 0 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:04.772 Attaching 5 probes... 00:27:04.772 1348.704093: reset bdev controller NVMe0 00:27:04.772 1348.816405: reconnect bdev controller NVMe0 00:27:04.772 3349.260508: reconnect delay bdev controller NVMe0 00:27:04.772 3349.285662: reconnect bdev controller NVMe0 00:27:04.772 5349.779466: reconnect delay bdev controller NVMe0 00:27:04.772 5349.817680: reconnect bdev controller NVMe0 00:27:04.772 7350.372173: reconnect delay bdev controller NVMe0 00:27:04.772 7350.409870: reconnect bdev controller NVMe0 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 88517 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 88502 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 88502 ']' 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 88502 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88502 00:27:04.772 killing process with pid 88502 00:27:04.772 Received shutdown signal, test time was about 8.210664 seconds 00:27:04.772 00:27:04.772 Latency(us) 00:27:04.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.772 =================================================================================================================== 00:27:04.772 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88502' 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout 
-- common/autotest_common.sh@967 -- # kill 88502 00:27:04.772 03:16:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 88502 00:27:06.151 03:16:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:06.151 03:16:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:27:06.151 03:16:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:27:06.151 03:16:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:06.151 03:16:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:06.410 rmmod nvme_tcp 00:27:06.410 rmmod nvme_fabrics 00:27:06.410 rmmod nvme_keyring 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 88051 ']' 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 88051 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 88051 ']' 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 88051 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88051 00:27:06.410 killing process with pid 88051 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88051' 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 88051 00:27:06.410 03:16:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 88051 00:27:07.805 03:16:14 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:07.805 03:16:14 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:07.805 03:16:14 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:07.805 03:16:14 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.805 03:16:14 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:07.805 03:16:14 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.805 03:16:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.805 03:16:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.805 03:16:14 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:07.805 00:27:07.805 real 0m51.488s 00:27:07.805 user 2m29.489s 
00:27:07.805 sys 0m5.633s 00:27:07.805 03:16:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:07.805 03:16:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.805 ************************************ 00:27:07.805 END TEST nvmf_timeout 00:27:07.805 ************************************ 00:27:07.805 03:16:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:07.805 03:16:14 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:27:07.805 03:16:14 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:07.805 03:16:14 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:07.805 03:16:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.805 03:16:14 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:07.805 00:27:07.805 real 16m13.064s 00:27:07.805 user 42m28.782s 00:27:07.805 sys 4m3.403s 00:27:07.805 03:16:14 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:07.805 03:16:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.805 ************************************ 00:27:07.805 END TEST nvmf_tcp 00:27:07.805 ************************************ 00:27:08.063 03:16:14 -- common/autotest_common.sh@1142 -- # return 0 00:27:08.063 03:16:14 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:27:08.063 03:16:14 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:08.063 03:16:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:08.063 03:16:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:08.063 03:16:14 -- common/autotest_common.sh@10 -- # set +x 00:27:08.063 ************************************ 00:27:08.063 START TEST nvmf_dif 00:27:08.063 ************************************ 00:27:08.063 03:16:14 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:08.063 * Looking for test storage... 
00:27:08.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:08.063 03:16:14 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.063 03:16:14 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:08.063 03:16:14 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.063 03:16:14 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.063 03:16:14 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.064 03:16:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.064 03:16:14 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.064 03:16:14 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.064 03:16:14 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:08.064 03:16:14 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:08.064 03:16:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:08.064 03:16:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:08.064 03:16:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:08.064 03:16:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:08.064 03:16:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.064 03:16:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:08.064 03:16:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:08.064 03:16:14 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:08.064 Cannot find device "nvmf_tgt_br" 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@155 -- # true 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:08.064 Cannot find device "nvmf_tgt_br2" 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@156 -- # true 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:08.064 Cannot find device "nvmf_tgt_br" 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@158 -- # true 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:08.064 Cannot find device "nvmf_tgt_br2" 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@159 -- # true 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:08.064 03:16:14 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:08.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:08.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:08.322 
03:16:14 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:08.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:08.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:27:08.322 00:27:08.322 --- 10.0.0.2 ping statistics --- 00:27:08.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.322 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:08.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:08.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:27:08.322 00:27:08.322 --- 10.0.0.3 ping statistics --- 00:27:08.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.322 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:08.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:08.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:27:08.322 00:27:08.322 --- 10.0.0.1 ping statistics --- 00:27:08.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.322 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:27:08.322 03:16:14 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:08.323 03:16:14 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:08.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:08.897 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:08.897 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:08.897 03:16:15 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.897 03:16:15 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:08.897 03:16:15 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:08.897 03:16:15 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.897 03:16:15 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:08.897 03:16:15 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:08.897 03:16:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:08.897 03:16:15 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:08.897 03:16:15 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:08.897 03:16:15 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:08.897 03:16:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:08.897 03:16:15 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=89011 00:27:08.897 
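The nvmf_veth_init sequence above wires the host-side initiator interface to the target namespace before nvmf_tgt is launched. Condensed into plain commands, the topology it builds looks roughly like the sketch below; interface names and addresses are taken from the commands in the log, but this is only an approximation of nvmf/common.sh (error handling and the individual `ip link set ... up` calls are omitted).
# Approximate topology assembled by nvmf_veth_init above (not verbatim nvmf/common.sh):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target-side veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target-side veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                                # bridge joins the *_br peer ends
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                             # sanity checks, as run in the log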
03:16:15 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 89011 00:27:08.897 03:16:15 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:08.897 03:16:15 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 89011 ']' 00:27:08.897 03:16:15 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.897 03:16:15 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:08.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.897 03:16:15 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.897 03:16:15 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:08.897 03:16:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:08.897 [2024-07-13 03:16:15.310153] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:08.897 [2024-07-13 03:16:15.310341] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.166 [2024-07-13 03:16:15.493815] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.424 [2024-07-13 03:16:15.744257] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.424 [2024-07-13 03:16:15.744346] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.424 [2024-07-13 03:16:15.744366] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.424 [2024-07-13 03:16:15.744392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.424 [2024-07-13 03:16:15.744415] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
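Once the reactor is up, the rpc_cmd calls that follow are thin wrappers around scripts/rpc.py talking to the nvmf_tgt just started in the namespace (the same script used earlier for nvmf_delete_subsystem). Assuming the default /var/tmp/spdk.sock RPC socket, the DIF-capable target that fio_dif_1_default exercises is built roughly as sketched below; every subcommand and argument is copied from the rpc_cmd lines that follow in the log.
# Rough equivalent of the create_transport / create_subsystem steps below (rpc_cmd wraps scripts/rpc.py):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip           # TCP transport with DIF insert/strip enabled
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1    # null bdev: 64 MB, 512B blocks, 16B metadata, DIF type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420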
00:27:09.424 [2024-07-13 03:16:15.744462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.682 [2024-07-13 03:16:15.959857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:09.941 03:16:16 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:09.941 03:16:16 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:27:09.941 03:16:16 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:09.941 03:16:16 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:09.941 03:16:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:09.941 03:16:16 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.941 03:16:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:09.941 03:16:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:09.941 03:16:16 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.941 03:16:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:09.941 [2024-07-13 03:16:16.279124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.941 03:16:16 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.941 03:16:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:09.941 03:16:16 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:09.941 03:16:16 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.941 03:16:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:09.941 ************************************ 00:27:09.941 START TEST fio_dif_1_default 00:27:09.941 ************************************ 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:09.941 bdev_null0 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.941 03:16:16 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:09.941 [2024-07-13 03:16:16.323369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:09.941 { 00:27:09.941 "params": { 00:27:09.941 "name": "Nvme$subsystem", 00:27:09.941 "trtype": "$TEST_TRANSPORT", 00:27:09.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.941 "adrfam": "ipv4", 00:27:09.941 "trsvcid": "$NVMF_PORT", 00:27:09.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.941 "hdgst": ${hdgst:-false}, 00:27:09.941 "ddgst": ${ddgst:-false} 00:27:09.941 }, 00:27:09.941 "method": "bdev_nvme_attach_controller" 00:27:09.941 } 00:27:09.941 EOF 00:27:09.941 )") 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:09.941 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:09.942 03:16:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:09.942 03:16:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:09.942 03:16:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:09.942 03:16:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:09.942 "params": { 00:27:09.942 "name": "Nvme0", 00:27:09.942 "trtype": "tcp", 00:27:09.942 "traddr": "10.0.0.2", 00:27:09.942 "adrfam": "ipv4", 00:27:09.942 "trsvcid": "4420", 00:27:09.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:09.942 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:09.942 "hdgst": false, 00:27:09.942 "ddgst": false 00:27:09.942 }, 00:27:09.942 "method": "bdev_nvme_attach_controller" 00:27:09.942 }' 00:27:09.942 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:09.942 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:09.942 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:27:09.942 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:09.942 03:16:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.201 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:10.201 fio-3.35 00:27:10.201 Starting 1 thread 00:27:22.401 00:27:22.401 filename0: (groupid=0, jobs=1): err= 0: pid=89075: Sat Jul 13 03:16:27 2024 00:27:22.401 read: IOPS=6157, BW=24.1MiB/s (25.2MB/s)(241MiB/10001msec) 00:27:22.401 slat (usec): min=7, max=201, avg=12.82, stdev= 6.83 00:27:22.401 clat (usec): min=450, max=2929, avg=610.98, stdev=58.92 00:27:22.401 lat (usec): min=458, max=2947, avg=623.80, stdev=60.13 00:27:22.401 clat percentiles (usec): 00:27:22.401 | 1.00th=[ 494], 5.00th=[ 523], 10.00th=[ 537], 20.00th=[ 562], 00:27:22.401 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 627], 00:27:22.401 | 70.00th=[ 635], 80.00th=[ 652], 90.00th=[ 676], 95.00th=[ 701], 00:27:22.401 | 99.00th=[ 742], 99.50th=[ 783], 99.90th=[ 947], 99.95th=[ 996], 00:27:22.401 | 99.99th=[ 1336] 00:27:22.401 bw ( KiB/s): min=23872, max=25120, per=100.00%, avg=24645.05, stdev=281.66, samples=19 00:27:22.401 iops : min= 5968, max= 6280, avg=6161.26, stdev=70.42, samples=19 00:27:22.401 lat (usec) : 500=1.56%, 750=97.60%, 1000=0.80% 00:27:22.401 lat (msec) : 2=0.04%, 4=0.01% 00:27:22.401 cpu : usr=86.16%, sys=11.70%, ctx=47, majf=0, minf=1062 00:27:22.401 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:22.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.401 issued rwts: total=61580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.401 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:22.401 
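The fio job itself is generated by gen_fio_conf and handed to fio over /dev/fd/61, so it never appears in the log. A hand-written approximation of the randread job summarized above is sketched here; the rw/bs/iodepth values come from the filename0 summary line, while runtime, thread, the Nvme0n1 filename and the bdev.json name are assumptions.
# Approximate standalone reproduction of the generated job (the real job arrives on /dev/fd/61):
cat > dif_default.fio <<'EOF'
[filename0]
filename=Nvme0n1        # bdev attached by the bdev_nvme_attach_controller JSON printed above
rw=randread
bs=4096
iodepth=4
thread=1
time_based=1
runtime=10
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json dif_default.fio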
00:27:22.401 Run status group 0 (all jobs): 00:27:22.401 READ: bw=24.1MiB/s (25.2MB/s), 24.1MiB/s-24.1MiB/s (25.2MB/s-25.2MB/s), io=241MiB (252MB), run=10001-10001msec 00:27:22.401 ----------------------------------------------------- 00:27:22.401 Suppressions used: 00:27:22.401 count bytes template 00:27:22.401 1 8 /usr/src/fio/parse.c 00:27:22.401 1 8 libtcmalloc_minimal.so 00:27:22.401 1 904 libcrypto.so 00:27:22.401 ----------------------------------------------------- 00:27:22.401 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.401 00:27:22.401 real 0m12.368s 00:27:22.401 user 0m10.563s 00:27:22.401 sys 0m1.527s 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:22.401 ************************************ 00:27:22.401 END TEST fio_dif_1_default 00:27:22.401 ************************************ 00:27:22.401 03:16:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:22.401 03:16:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:22.401 03:16:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:22.401 03:16:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.401 03:16:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:22.401 ************************************ 00:27:22.401 START TEST fio_dif_1_multi_subsystems 00:27:22.401 ************************************ 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:22.401 03:16:28 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:22.401 bdev_null0 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:22.401 [2024-07-13 03:16:28.742021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:22.401 bdev_null1 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:22.401 
03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:22.401 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.402 { 00:27:22.402 "params": { 00:27:22.402 "name": "Nvme$subsystem", 00:27:22.402 "trtype": "$TEST_TRANSPORT", 00:27:22.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.402 "adrfam": "ipv4", 00:27:22.402 "trsvcid": "$NVMF_PORT", 00:27:22.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.402 "hdgst": ${hdgst:-false}, 00:27:22.402 "ddgst": ${ddgst:-false} 00:27:22.402 }, 00:27:22.402 "method": "bdev_nvme_attach_controller" 00:27:22.402 } 00:27:22.402 EOF 00:27:22.402 )") 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.402 { 00:27:22.402 "params": { 00:27:22.402 "name": "Nvme$subsystem", 00:27:22.402 "trtype": "$TEST_TRANSPORT", 00:27:22.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.402 "adrfam": "ipv4", 00:27:22.402 "trsvcid": "$NVMF_PORT", 00:27:22.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.402 "hdgst": ${hdgst:-false}, 00:27:22.402 "ddgst": ${ddgst:-false} 00:27:22.402 }, 00:27:22.402 "method": "bdev_nvme_attach_controller" 00:27:22.402 } 00:27:22.402 EOF 00:27:22.402 )") 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:22.402 "params": { 00:27:22.402 "name": "Nvme0", 00:27:22.402 "trtype": "tcp", 00:27:22.402 "traddr": "10.0.0.2", 00:27:22.402 "adrfam": "ipv4", 00:27:22.402 "trsvcid": "4420", 00:27:22.402 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:22.402 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:22.402 "hdgst": false, 00:27:22.402 "ddgst": false 00:27:22.402 }, 00:27:22.402 "method": "bdev_nvme_attach_controller" 00:27:22.402 },{ 00:27:22.402 "params": { 00:27:22.402 "name": "Nvme1", 00:27:22.402 "trtype": "tcp", 00:27:22.402 "traddr": "10.0.0.2", 00:27:22.402 "adrfam": "ipv4", 00:27:22.402 "trsvcid": "4420", 00:27:22.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:22.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:22.402 "hdgst": false, 00:27:22.402 "ddgst": false 00:27:22.402 }, 00:27:22.402 "method": "bdev_nvme_attach_controller" 00:27:22.402 }' 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:22.402 03:16:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:22.661 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:22.661 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:22.661 fio-3.35 00:27:22.661 Starting 2 threads 00:27:34.878 00:27:34.878 filename0: (groupid=0, jobs=1): err= 0: pid=89234: Sat Jul 13 03:16:39 2024 00:27:34.878 read: IOPS=3811, BW=14.9MiB/s (15.6MB/s)(149MiB/10001msec) 00:27:34.878 slat (nsec): min=7308, max=84431, avg=15933.08, stdev=6014.50 00:27:34.878 clat (usec): min=723, max=8200, avg=1004.77, stdev=138.29 00:27:34.878 lat (usec): min=731, max=8238, avg=1020.71, stdev=139.90 00:27:34.878 clat percentiles (usec): 00:27:34.878 | 1.00th=[ 783], 5.00th=[ 832], 10.00th=[ 865], 20.00th=[ 898], 00:27:34.878 | 30.00th=[ 930], 40.00th=[ 963], 50.00th=[ 996], 60.00th=[ 1029], 00:27:34.878 | 70.00th=[ 1057], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:27:34.878 | 99.00th=[ 1336], 99.50th=[ 1434], 99.90th=[ 1565], 99.95th=[ 1631], 00:27:34.878 | 99.99th=[ 8160] 00:27:34.878 bw ( KiB/s): min=13760, max=17056, per=50.24%, avg=15321.32, stdev=1108.15, samples=19 00:27:34.878 iops : min= 3440, max= 4264, avg=3830.32, stdev=277.03, samples=19 00:27:34.879 lat (usec) : 750=0.08%, 1000=50.55% 00:27:34.879 lat (msec) : 2=49.36%, 10=0.01% 00:27:34.879 cpu : usr=90.87%, sys=7.65%, ctx=25, majf=0, minf=1074 00:27:34.879 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:34.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.879 issued rwts: total=38120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:27:34.879 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:34.879 filename1: (groupid=0, jobs=1): err= 0: pid=89235: Sat Jul 13 03:16:39 2024 00:27:34.879 read: IOPS=3812, BW=14.9MiB/s (15.6MB/s)(149MiB/10001msec) 00:27:34.879 slat (nsec): min=7658, max=86410, avg=15985.78, stdev=6131.19 00:27:34.879 clat (usec): min=580, max=5861, avg=1004.09, stdev=121.08 00:27:34.879 lat (usec): min=589, max=5897, avg=1020.07, stdev=122.35 00:27:34.879 clat percentiles (usec): 00:27:34.879 | 1.00th=[ 816], 5.00th=[ 848], 10.00th=[ 873], 20.00th=[ 898], 00:27:34.879 | 30.00th=[ 930], 40.00th=[ 971], 50.00th=[ 1004], 60.00th=[ 1029], 00:27:34.879 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1188], 00:27:34.879 | 99.00th=[ 1336], 99.50th=[ 1418], 99.90th=[ 1565], 99.95th=[ 1631], 00:27:34.879 | 99.99th=[ 5800] 00:27:34.879 bw ( KiB/s): min=13824, max=17024, per=50.25%, avg=15324.63, stdev=1103.98, samples=19 00:27:34.879 iops : min= 3456, max= 4256, avg=3831.16, stdev=276.00, samples=19 00:27:34.879 lat (usec) : 750=0.04%, 1000=50.00% 00:27:34.879 lat (msec) : 2=49.96%, 10=0.01% 00:27:34.879 cpu : usr=90.68%, sys=7.85%, ctx=16, majf=0, minf=1075 00:27:34.879 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:34.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.879 issued rwts: total=38132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:34.879 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:34.879 00:27:34.879 Run status group 0 (all jobs): 00:27:34.879 READ: bw=29.8MiB/s (31.2MB/s), 14.9MiB/s-14.9MiB/s (15.6MB/s-15.6MB/s), io=298MiB (312MB), run=10001-10001msec 00:27:34.879 ----------------------------------------------------- 00:27:34.879 Suppressions used: 00:27:34.879 count bytes template 00:27:34.879 2 16 /usr/src/fio/parse.c 00:27:34.879 1 8 libtcmalloc_minimal.so 00:27:34.879 1 904 libcrypto.so 00:27:34.879 ----------------------------------------------------- 00:27:34.879 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@45 -- # for sub in "$@" 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.879 00:27:34.879 real 0m12.610s 00:27:34.879 user 0m20.290s 00:27:34.879 sys 0m1.945s 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.879 ************************************ 00:27:34.879 END TEST fio_dif_1_multi_subsystems 00:27:34.879 03:16:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:34.879 ************************************ 00:27:34.879 03:16:41 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:34.879 03:16:41 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:34.879 03:16:41 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:34.879 03:16:41 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.879 03:16:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:35.138 ************************************ 00:27:35.138 START TEST fio_dif_rand_params 00:27:35.138 ************************************ 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:35.138 
03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:35.138 bdev_null0 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:35.138 [2024-07-13 03:16:41.407459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:35.138 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.139 { 00:27:35.139 "params": { 00:27:35.139 "name": "Nvme$subsystem", 00:27:35.139 "trtype": "$TEST_TRANSPORT", 00:27:35.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.139 "adrfam": "ipv4", 00:27:35.139 "trsvcid": "$NVMF_PORT", 00:27:35.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.139 "hdgst": ${hdgst:-false}, 00:27:35.139 "ddgst": ${ddgst:-false} 00:27:35.139 }, 00:27:35.139 
"method": "bdev_nvme_attach_controller" 00:27:35.139 } 00:27:35.139 EOF 00:27:35.139 )") 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:35.139 "params": { 00:27:35.139 "name": "Nvme0", 00:27:35.139 "trtype": "tcp", 00:27:35.139 "traddr": "10.0.0.2", 00:27:35.139 "adrfam": "ipv4", 00:27:35.139 "trsvcid": "4420", 00:27:35.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:35.139 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:35.139 "hdgst": false, 00:27:35.139 "ddgst": false 00:27:35.139 }, 00:27:35.139 "method": "bdev_nvme_attach_controller" 00:27:35.139 }' 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:35.139 03:16:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:35.398 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:35.398 ... 
00:27:35.398 fio-3.35 00:27:35.398 Starting 3 threads 00:27:41.959 00:27:41.959 filename0: (groupid=0, jobs=1): err= 0: pid=89394: Sat Jul 13 03:16:47 2024 00:27:41.959 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(124MiB/5008msec) 00:27:41.959 slat (nsec): min=5516, max=82449, avg=20424.18, stdev=8344.13 00:27:41.959 clat (usec): min=13756, max=17425, avg=15126.95, stdev=697.44 00:27:41.959 lat (usec): min=13772, max=17455, avg=15147.37, stdev=698.34 00:27:41.959 clat percentiles (usec): 00:27:41.959 | 1.00th=[13829], 5.00th=[13960], 10.00th=[14091], 20.00th=[14484], 00:27:41.959 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:27:41.959 | 70.00th=[15401], 80.00th=[15664], 90.00th=[15926], 95.00th=[16319], 00:27:41.959 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17433], 99.95th=[17433], 00:27:41.959 | 99.99th=[17433] 00:27:41.959 bw ( KiB/s): min=23760, max=26112, per=33.28%, avg=25262.40, stdev=773.99, samples=10 00:27:41.959 iops : min= 185, max= 204, avg=197.30, stdev= 6.18, samples=10 00:27:41.959 lat (msec) : 20=100.00% 00:27:41.959 cpu : usr=91.31%, sys=7.81%, ctx=39, majf=0, minf=1075 00:27:41.959 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:41.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.959 issued rwts: total=990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.959 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:41.959 filename0: (groupid=0, jobs=1): err= 0: pid=89395: Sat Jul 13 03:16:47 2024 00:27:41.959 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(124MiB/5008msec) 00:27:41.959 slat (nsec): min=8104, max=76774, avg=20807.04, stdev=7874.91 00:27:41.959 clat (usec): min=13853, max=17071, avg=15125.47, stdev=688.12 00:27:41.959 lat (usec): min=13868, max=17097, avg=15146.28, stdev=689.34 00:27:41.959 clat percentiles (usec): 00:27:41.959 | 1.00th=[13829], 5.00th=[13960], 10.00th=[14091], 20.00th=[14484], 00:27:41.959 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:27:41.959 | 70.00th=[15401], 80.00th=[15664], 90.00th=[15926], 95.00th=[16319], 00:27:41.959 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:27:41.959 | 99.99th=[17171] 00:27:41.959 bw ( KiB/s): min=23808, max=26112, per=33.29%, avg=25267.20, stdev=763.72, samples=10 00:27:41.959 iops : min= 186, max= 204, avg=197.40, stdev= 5.97, samples=10 00:27:41.959 lat (msec) : 20=100.00% 00:27:41.959 cpu : usr=91.63%, sys=7.65%, ctx=6, majf=0, minf=1075 00:27:41.959 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:41.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.959 issued rwts: total=990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.959 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:41.959 filename0: (groupid=0, jobs=1): err= 0: pid=89396: Sat Jul 13 03:16:47 2024 00:27:41.959 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(124MiB/5006msec) 00:27:41.959 slat (nsec): min=5580, max=78180, avg=20913.95, stdev=7944.52 00:27:41.959 clat (usec): min=13846, max=17054, avg=15120.02, stdev=682.32 00:27:41.959 lat (usec): min=13861, max=17083, avg=15140.93, stdev=683.60 00:27:41.959 clat percentiles (usec): 00:27:41.959 | 1.00th=[13829], 5.00th=[13960], 10.00th=[14091], 20.00th=[14484], 00:27:41.959 | 30.00th=[14877], 40.00th=[15008], 
50.00th=[15139], 60.00th=[15270], 00:27:41.959 | 70.00th=[15401], 80.00th=[15664], 90.00th=[15926], 95.00th=[16319], 00:27:41.959 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:27:41.959 | 99.99th=[17171] 00:27:41.959 bw ( KiB/s): min=23808, max=26112, per=33.39%, avg=25344.00, stdev=768.00, samples=9 00:27:41.959 iops : min= 186, max= 204, avg=198.00, stdev= 6.00, samples=9 00:27:41.959 lat (msec) : 20=100.00% 00:27:41.959 cpu : usr=92.01%, sys=7.29%, ctx=31, majf=0, minf=1073 00:27:41.959 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:41.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.959 issued rwts: total=990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.959 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:41.959 00:27:41.959 Run status group 0 (all jobs): 00:27:41.959 READ: bw=74.1MiB/s (77.7MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=371MiB (389MB), run=5006-5008msec 00:27:42.527 ----------------------------------------------------- 00:27:42.527 Suppressions used: 00:27:42.527 count bytes template 00:27:42.527 5 44 /usr/src/fio/parse.c 00:27:42.527 1 8 libtcmalloc_minimal.so 00:27:42.528 1 904 libcrypto.so 00:27:42.528 ----------------------------------------------------- 00:27:42.528 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:42.528 03:16:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 bdev_null0 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 [2024-07-13 03:16:48.793359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 bdev_null1 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:42.528 03:16:48 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 bdev_null2 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:27:42.528 { 00:27:42.528 "params": { 00:27:42.528 "name": "Nvme$subsystem", 00:27:42.528 "trtype": "$TEST_TRANSPORT", 00:27:42.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.528 "adrfam": "ipv4", 00:27:42.528 "trsvcid": "$NVMF_PORT", 00:27:42.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.528 "hdgst": ${hdgst:-false}, 00:27:42.528 "ddgst": ${ddgst:-false} 00:27:42.528 }, 00:27:42.528 "method": "bdev_nvme_attach_controller" 00:27:42.528 } 00:27:42.528 EOF 00:27:42.528 )") 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.528 { 00:27:42.528 "params": { 00:27:42.528 "name": "Nvme$subsystem", 00:27:42.528 "trtype": "$TEST_TRANSPORT", 00:27:42.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.528 "adrfam": "ipv4", 00:27:42.528 "trsvcid": "$NVMF_PORT", 00:27:42.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.528 "hdgst": ${hdgst:-false}, 00:27:42.528 "ddgst": ${ddgst:-false} 00:27:42.528 }, 00:27:42.528 "method": "bdev_nvme_attach_controller" 00:27:42.528 } 00:27:42.528 EOF 00:27:42.528 )") 00:27:42.528 03:16:48 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.529 { 00:27:42.529 "params": { 00:27:42.529 "name": "Nvme$subsystem", 00:27:42.529 "trtype": "$TEST_TRANSPORT", 00:27:42.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.529 "adrfam": "ipv4", 00:27:42.529 "trsvcid": "$NVMF_PORT", 00:27:42.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.529 "hdgst": ${hdgst:-false}, 00:27:42.529 "ddgst": ${ddgst:-false} 00:27:42.529 }, 00:27:42.529 "method": "bdev_nvme_attach_controller" 00:27:42.529 } 00:27:42.529 EOF 00:27:42.529 )") 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:42.529 "params": { 00:27:42.529 "name": "Nvme0", 00:27:42.529 "trtype": "tcp", 00:27:42.529 "traddr": "10.0.0.2", 00:27:42.529 "adrfam": "ipv4", 00:27:42.529 "trsvcid": "4420", 00:27:42.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:42.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:42.529 "hdgst": false, 00:27:42.529 "ddgst": false 00:27:42.529 }, 00:27:42.529 "method": "bdev_nvme_attach_controller" 00:27:42.529 },{ 00:27:42.529 "params": { 00:27:42.529 "name": "Nvme1", 00:27:42.529 "trtype": "tcp", 00:27:42.529 "traddr": "10.0.0.2", 00:27:42.529 "adrfam": "ipv4", 00:27:42.529 "trsvcid": "4420", 00:27:42.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:42.529 "hdgst": false, 00:27:42.529 "ddgst": false 00:27:42.529 }, 00:27:42.529 "method": "bdev_nvme_attach_controller" 00:27:42.529 },{ 00:27:42.529 "params": { 00:27:42.529 "name": "Nvme2", 00:27:42.529 "trtype": "tcp", 00:27:42.529 "traddr": "10.0.0.2", 00:27:42.529 "adrfam": "ipv4", 00:27:42.529 "trsvcid": "4420", 00:27:42.529 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:42.529 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:42.529 "hdgst": false, 00:27:42.529 "ddgst": false 00:27:42.529 }, 00:27:42.529 "method": "bdev_nvme_attach_controller" 00:27:42.529 }' 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:42.529 03:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:27:42.787 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:42.787 ... 00:27:42.787 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:42.787 ... 00:27:42.787 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:42.787 ... 00:27:42.787 fio-3.35 00:27:42.787 Starting 24 threads 00:27:55.002 00:27:55.002 filename0: (groupid=0, jobs=1): err= 0: pid=89497: Sat Jul 13 03:17:00 2024 00:27:55.002 read: IOPS=199, BW=799KiB/s (818kB/s)(7996KiB/10012msec) 00:27:55.002 slat (usec): min=4, max=8035, avg=30.70, stdev=319.37 00:27:55.002 clat (msec): min=13, max=143, avg=79.98, stdev=21.58 00:27:55.002 lat (msec): min=13, max=143, avg=80.01, stdev=21.58 00:27:55.002 clat percentiles (msec): 00:27:55.002 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 61], 00:27:55.002 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 86], 00:27:55.002 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 120], 00:27:55.002 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:27:55.002 | 99.99th=[ 144] 00:27:55.002 bw ( KiB/s): min= 712, max= 928, per=4.35%, avg=791.21, stdev=59.88, samples=19 00:27:55.002 iops : min= 178, max= 232, avg=197.79, stdev=14.97, samples=19 00:27:55.002 lat (msec) : 20=0.35%, 50=6.40%, 100=80.34%, 250=12.91% 00:27:55.002 cpu : usr=32.86%, sys=1.90%, ctx=976, majf=0, minf=1075 00:27:55.002 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:55.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 complete : 0=0.0%, 4=86.7%, 8=13.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 issued rwts: total=1999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.002 filename0: (groupid=0, jobs=1): err= 0: pid=89498: Sat Jul 13 03:17:00 2024 00:27:55.002 read: IOPS=191, BW=764KiB/s (783kB/s)(7660KiB/10020msec) 00:27:55.002 slat (usec): min=4, max=9031, avg=61.94, stdev=592.39 00:27:55.002 clat (msec): min=30, max=144, avg=83.42, stdev=21.00 00:27:55.002 lat (msec): min=30, max=144, avg=83.48, stdev=21.01 00:27:55.002 clat percentiles (msec): 00:27:55.002 | 1.00th=[ 39], 5.00th=[ 53], 10.00th=[ 59], 20.00th=[ 61], 00:27:55.002 | 30.00th=[ 70], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 91], 00:27:55.002 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 121], 00:27:55.002 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:27:55.002 | 99.99th=[ 144] 00:27:55.002 bw ( KiB/s): min= 528, max= 872, per=4.16%, avg=756.37, stdev=70.79, samples=19 00:27:55.002 iops : min= 132, max= 218, avg=189.05, stdev=17.69, samples=19 00:27:55.002 lat (msec) : 50=3.86%, 100=78.43%, 250=17.70% 00:27:55.002 cpu : usr=31.63%, sys=1.72%, ctx=950, majf=0, minf=1075 00:27:55.002 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=78.2%, 16=15.0%, 32=0.0%, >=64=0.0% 00:27:55.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 complete : 0=0.0%, 4=88.2%, 8=10.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 issued rwts: total=1915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.002 filename0: (groupid=0, jobs=1): err= 0: pid=89499: Sat Jul 13 03:17:00 2024 00:27:55.002 read: IOPS=199, BW=800KiB/s (819kB/s)(8028KiB/10041msec) 
00:27:55.002 slat (usec): min=5, max=8035, avg=21.55, stdev=179.52 00:27:55.002 clat (msec): min=23, max=147, avg=79.91, stdev=21.59 00:27:55.002 lat (msec): min=23, max=147, avg=79.93, stdev=21.58 00:27:55.002 clat percentiles (msec): 00:27:55.002 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 61], 00:27:55.002 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 87], 00:27:55.002 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 120], 00:27:55.002 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:27:55.002 | 99.99th=[ 148] 00:27:55.002 bw ( KiB/s): min= 672, max= 960, per=4.37%, avg=795.75, stdev=76.15, samples=20 00:27:55.002 iops : min= 168, max= 240, avg=198.90, stdev=19.01, samples=20 00:27:55.002 lat (msec) : 50=6.68%, 100=80.37%, 250=12.95% 00:27:55.002 cpu : usr=35.82%, sys=2.11%, ctx=1065, majf=0, minf=1073 00:27:55.002 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:27:55.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 issued rwts: total=2007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.002 filename0: (groupid=0, jobs=1): err= 0: pid=89500: Sat Jul 13 03:17:00 2024 00:27:55.002 read: IOPS=190, BW=761KiB/s (779kB/s)(7648KiB/10049msec) 00:27:55.002 slat (usec): min=5, max=8033, avg=25.34, stdev=209.63 00:27:55.002 clat (msec): min=23, max=155, avg=83.84, stdev=21.86 00:27:55.002 lat (msec): min=23, max=155, avg=83.87, stdev=21.86 00:27:55.002 clat percentiles (msec): 00:27:55.002 | 1.00th=[ 40], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 62], 00:27:55.002 | 30.00th=[ 71], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 91], 00:27:55.002 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 123], 00:27:55.002 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 150], 99.95th=[ 157], 00:27:55.002 | 99.99th=[ 157] 00:27:55.002 bw ( KiB/s): min= 528, max= 1024, per=4.17%, avg=758.40, stdev=92.15, samples=20 00:27:55.002 iops : min= 132, max= 256, avg=189.60, stdev=23.04, samples=20 00:27:55.002 lat (msec) : 50=5.28%, 100=79.71%, 250=15.01% 00:27:55.002 cpu : usr=34.05%, sys=2.09%, ctx=1453, majf=0, minf=1074 00:27:55.002 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=77.9%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:55.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.002 filename0: (groupid=0, jobs=1): err= 0: pid=89501: Sat Jul 13 03:17:00 2024 00:27:55.002 read: IOPS=194, BW=779KiB/s (798kB/s)(7844KiB/10065msec) 00:27:55.002 slat (usec): min=5, max=4035, avg=25.55, stdev=181.36 00:27:55.002 clat (msec): min=17, max=159, avg=81.85, stdev=22.69 00:27:55.002 lat (msec): min=17, max=159, avg=81.88, stdev=22.69 00:27:55.002 clat percentiles (msec): 00:27:55.002 | 1.00th=[ 22], 5.00th=[ 43], 10.00th=[ 56], 20.00th=[ 63], 00:27:55.002 | 30.00th=[ 66], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 89], 00:27:55.002 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 121], 00:27:55.002 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 161], 00:27:55.002 | 99.99th=[ 161] 00:27:55.002 bw ( KiB/s): min= 664, max= 1256, per=4.27%, avg=777.35, stdev=123.64, samples=20 00:27:55.002 iops : min= 
166, max= 314, avg=194.30, stdev=30.94, samples=20 00:27:55.002 lat (msec) : 20=0.71%, 50=5.97%, 100=76.80%, 250=16.52% 00:27:55.002 cpu : usr=40.77%, sys=2.61%, ctx=1178, majf=0, minf=1075 00:27:55.002 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:27:55.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 issued rwts: total=1961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.002 filename0: (groupid=0, jobs=1): err= 0: pid=89502: Sat Jul 13 03:17:00 2024 00:27:55.002 read: IOPS=178, BW=716KiB/s (733kB/s)(7164KiB/10006msec) 00:27:55.002 slat (usec): min=5, max=8039, avg=33.32, stdev=345.50 00:27:55.002 clat (msec): min=9, max=155, avg=89.21, stdev=22.16 00:27:55.002 lat (msec): min=9, max=155, avg=89.25, stdev=22.16 00:27:55.002 clat percentiles (msec): 00:27:55.002 | 1.00th=[ 31], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 71], 00:27:55.002 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 95], 00:27:55.002 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 120], 95.00th=[ 131], 00:27:55.002 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:27:55.002 | 99.99th=[ 155] 00:27:55.002 bw ( KiB/s): min= 512, max= 816, per=3.85%, avg=701.47, stdev=79.69, samples=19 00:27:55.002 iops : min= 128, max= 204, avg=175.37, stdev=19.92, samples=19 00:27:55.002 lat (msec) : 10=0.39%, 20=0.34%, 50=2.01%, 100=73.87%, 250=23.39% 00:27:55.002 cpu : usr=37.87%, sys=2.19%, ctx=1111, majf=0, minf=1074 00:27:55.002 IO depths : 1=0.1%, 2=3.2%, 4=12.7%, 8=70.1%, 16=14.0%, 32=0.0%, >=64=0.0% 00:27:55.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 complete : 0=0.0%, 4=90.5%, 8=6.8%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 issued rwts: total=1791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.002 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.002 filename0: (groupid=0, jobs=1): err= 0: pid=89503: Sat Jul 13 03:17:00 2024 00:27:55.002 read: IOPS=184, BW=739KiB/s (757kB/s)(7448KiB/10074msec) 00:27:55.002 slat (usec): min=5, max=8032, avg=35.72, stdev=358.13 00:27:55.002 clat (msec): min=7, max=168, avg=86.26, stdev=27.55 00:27:55.002 lat (msec): min=7, max=168, avg=86.29, stdev=27.56 00:27:55.002 clat percentiles (msec): 00:27:55.002 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 57], 20.00th=[ 66], 00:27:55.002 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 95], 00:27:55.002 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 120], 95.00th=[ 132], 00:27:55.002 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:27:55.002 | 99.99th=[ 169] 00:27:55.002 bw ( KiB/s): min= 528, max= 1536, per=4.06%, avg=738.15, stdev=202.42, samples=20 00:27:55.002 iops : min= 132, max= 384, avg=184.50, stdev=50.61, samples=20 00:27:55.002 lat (msec) : 10=1.83%, 20=1.61%, 50=5.69%, 100=70.57%, 250=20.30% 00:27:55.002 cpu : usr=36.16%, sys=2.04%, ctx=997, majf=0, minf=1075 00:27:55.002 IO depths : 1=0.2%, 2=2.7%, 4=10.3%, 8=72.0%, 16=14.8%, 32=0.0%, >=64=0.0% 00:27:55.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.002 complete : 0=0.0%, 4=90.2%, 8=7.5%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 issued rwts: total=1862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.003 filename0: (groupid=0, jobs=1): err= 0: 
pid=89504: Sat Jul 13 03:17:00 2024 00:27:55.003 read: IOPS=197, BW=791KiB/s (810kB/s)(7956KiB/10052msec) 00:27:55.003 slat (usec): min=5, max=8046, avg=37.91, stdev=359.50 00:27:55.003 clat (msec): min=26, max=143, avg=80.59, stdev=20.74 00:27:55.003 lat (msec): min=26, max=143, avg=80.63, stdev=20.74 00:27:55.003 clat percentiles (msec): 00:27:55.003 | 1.00th=[ 41], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 61], 00:27:55.003 | 30.00th=[ 65], 40.00th=[ 74], 50.00th=[ 85], 60.00th=[ 88], 00:27:55.003 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 118], 00:27:55.003 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:27:55.003 | 99.99th=[ 144] 00:27:55.003 bw ( KiB/s): min= 688, max= 920, per=4.34%, avg=790.85, stdev=58.24, samples=20 00:27:55.003 iops : min= 172, max= 230, avg=197.70, stdev=14.56, samples=20 00:27:55.003 lat (msec) : 50=3.67%, 100=83.46%, 250=12.87% 00:27:55.003 cpu : usr=41.21%, sys=2.49%, ctx=1273, majf=0, minf=1075 00:27:55.003 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=83.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:55.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 issued rwts: total=1989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.003 filename1: (groupid=0, jobs=1): err= 0: pid=89505: Sat Jul 13 03:17:00 2024 00:27:55.003 read: IOPS=213, BW=853KiB/s (873kB/s)(8588KiB/10069msec) 00:27:55.003 slat (usec): min=5, max=4041, avg=21.82, stdev=150.26 00:27:55.003 clat (usec): min=1970, max=159945, avg=74736.31, stdev=31276.82 00:27:55.003 lat (usec): min=1982, max=159965, avg=74758.13, stdev=31279.00 00:27:55.003 clat percentiles (msec): 00:27:55.003 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 24], 20.00th=[ 56], 00:27:55.003 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 85], 60.00th=[ 88], 00:27:55.003 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:27:55.003 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 157], 00:27:55.003 | 99.99th=[ 161] 00:27:55.003 bw ( KiB/s): min= 656, max= 2776, per=4.68%, avg=852.10, stdev=455.41, samples=20 00:27:55.003 iops : min= 164, max= 694, avg=213.00, stdev=113.86, samples=20 00:27:55.003 lat (msec) : 2=0.09%, 4=3.63%, 10=5.12%, 20=0.75%, 50=7.59% 00:27:55.003 lat (msec) : 100=67.54%, 250=15.28% 00:27:55.003 cpu : usr=41.74%, sys=2.27%, ctx=1234, majf=0, minf=1073 00:27:55.003 IO depths : 1=0.6%, 2=1.3%, 4=3.1%, 8=79.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:27:55.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 complete : 0=0.0%, 4=88.0%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 issued rwts: total=2147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.003 filename1: (groupid=0, jobs=1): err= 0: pid=89506: Sat Jul 13 03:17:00 2024 00:27:55.003 read: IOPS=163, BW=653KiB/s (669kB/s)(6548KiB/10025msec) 00:27:55.003 slat (usec): min=5, max=9041, avg=30.95, stdev=329.58 00:27:55.003 clat (msec): min=25, max=180, avg=97.61, stdev=20.26 00:27:55.003 lat (msec): min=25, max=180, avg=97.64, stdev=20.25 00:27:55.003 clat percentiles (msec): 00:27:55.003 | 1.00th=[ 40], 5.00th=[ 68], 10.00th=[ 81], 20.00th=[ 86], 00:27:55.003 | 30.00th=[ 89], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 96], 00:27:55.003 | 70.00th=[ 106], 80.00th=[ 117], 90.00th=[ 124], 95.00th=[ 132], 00:27:55.003 | 99.00th=[ 157], 
99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 182], 00:27:55.003 | 99.99th=[ 182] 00:27:55.003 bw ( KiB/s): min= 400, max= 897, per=3.57%, avg=650.25, stdev=114.69, samples=20 00:27:55.003 iops : min= 100, max= 224, avg=162.50, stdev=28.65, samples=20 00:27:55.003 lat (msec) : 50=2.69%, 100=64.45%, 250=32.86% 00:27:55.003 cpu : usr=34.70%, sys=2.08%, ctx=982, majf=0, minf=1072 00:27:55.003 IO depths : 1=0.1%, 2=6.0%, 4=23.9%, 8=57.4%, 16=12.6%, 32=0.0%, >=64=0.0% 00:27:55.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 complete : 0=0.0%, 4=94.1%, 8=0.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 issued rwts: total=1637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.003 filename1: (groupid=0, jobs=1): err= 0: pid=89507: Sat Jul 13 03:17:00 2024 00:27:55.003 read: IOPS=196, BW=788KiB/s (807kB/s)(7924KiB/10057msec) 00:27:55.003 slat (usec): min=5, max=12046, avg=23.65, stdev=270.33 00:27:55.003 clat (msec): min=21, max=143, avg=80.93, stdev=21.54 00:27:55.003 lat (msec): min=21, max=143, avg=80.95, stdev=21.53 00:27:55.003 clat percentiles (msec): 00:27:55.003 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 63], 00:27:55.003 | 30.00th=[ 65], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 88], 00:27:55.003 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 118], 00:27:55.003 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:27:55.003 | 99.99th=[ 144] 00:27:55.003 bw ( KiB/s): min= 688, max= 1032, per=4.32%, avg=785.70, stdev=77.17, samples=20 00:27:55.003 iops : min= 172, max= 258, avg=196.40, stdev=19.28, samples=20 00:27:55.003 lat (msec) : 50=6.66%, 100=79.40%, 250=13.93% 00:27:55.003 cpu : usr=40.52%, sys=2.42%, ctx=1175, majf=0, minf=1072 00:27:55.003 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:27:55.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 issued rwts: total=1981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.003 filename1: (groupid=0, jobs=1): err= 0: pid=89508: Sat Jul 13 03:17:00 2024 00:27:55.003 read: IOPS=196, BW=784KiB/s (803kB/s)(7864KiB/10028msec) 00:27:55.003 slat (usec): min=4, max=8033, avg=33.51, stdev=338.13 00:27:55.003 clat (msec): min=30, max=143, avg=81.41, stdev=21.43 00:27:55.003 lat (msec): min=30, max=143, avg=81.44, stdev=21.43 00:27:55.003 clat percentiles (msec): 00:27:55.003 | 1.00th=[ 35], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 61], 00:27:55.003 | 30.00th=[ 64], 40.00th=[ 73], 50.00th=[ 85], 60.00th=[ 87], 00:27:55.003 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:27:55.003 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:27:55.003 | 99.99th=[ 144] 00:27:55.003 bw ( KiB/s): min= 664, max= 896, per=4.28%, avg=779.80, stdev=58.35, samples=20 00:27:55.003 iops : min= 166, max= 224, avg=194.95, stdev=14.59, samples=20 00:27:55.003 lat (msec) : 50=5.95%, 100=79.60%, 250=14.45% 00:27:55.003 cpu : usr=32.88%, sys=1.91%, ctx=968, majf=0, minf=1075 00:27:55.003 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=80.2%, 16=15.2%, 32=0.0%, >=64=0.0% 00:27:55.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 complete : 0=0.0%, 4=87.6%, 8=11.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 issued rwts: total=1966,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:27:55.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.003 filename1: (groupid=0, jobs=1): err= 0: pid=89509: Sat Jul 13 03:17:00 2024 00:27:55.003 read: IOPS=191, BW=766KiB/s (785kB/s)(7708KiB/10059msec) 00:27:55.003 slat (usec): min=6, max=8037, avg=41.41, stdev=426.56 00:27:55.003 clat (msec): min=32, max=157, avg=83.19, stdev=21.35 00:27:55.003 lat (msec): min=32, max=157, avg=83.23, stdev=21.35 00:27:55.003 clat percentiles (msec): 00:27:55.003 | 1.00th=[ 36], 5.00th=[ 51], 10.00th=[ 58], 20.00th=[ 61], 00:27:55.003 | 30.00th=[ 71], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 92], 00:27:55.003 | 70.00th=[ 96], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:27:55.003 | 99.00th=[ 134], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:27:55.003 | 99.99th=[ 157] 00:27:55.003 bw ( KiB/s): min= 664, max= 1008, per=4.20%, avg=764.40, stdev=82.30, samples=20 00:27:55.003 iops : min= 166, max= 252, avg=191.10, stdev=20.58, samples=20 00:27:55.003 lat (msec) : 50=4.83%, 100=79.92%, 250=15.26% 00:27:55.003 cpu : usr=33.50%, sys=2.06%, ctx=988, majf=0, minf=1075 00:27:55.003 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:27:55.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 issued rwts: total=1927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.003 filename1: (groupid=0, jobs=1): err= 0: pid=89510: Sat Jul 13 03:17:00 2024 00:27:55.003 read: IOPS=199, BW=800KiB/s (819kB/s)(8020KiB/10026msec) 00:27:55.003 slat (usec): min=5, max=4036, avg=23.82, stdev=155.46 00:27:55.003 clat (msec): min=26, max=143, avg=79.88, stdev=21.22 00:27:55.003 lat (msec): min=26, max=143, avg=79.91, stdev=21.22 00:27:55.003 clat percentiles (msec): 00:27:55.003 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 61], 00:27:55.003 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 87], 00:27:55.003 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 121], 00:27:55.003 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:27:55.003 | 99.99th=[ 144] 00:27:55.003 bw ( KiB/s): min= 664, max= 960, per=4.37%, avg=795.30, stdev=73.91, samples=20 00:27:55.003 iops : min= 166, max= 240, avg=198.80, stdev=18.52, samples=20 00:27:55.003 lat (msec) : 50=6.83%, 100=80.00%, 250=13.17% 00:27:55.003 cpu : usr=37.83%, sys=1.90%, ctx=1065, majf=0, minf=1075 00:27:55.003 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:27:55.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 issued rwts: total=2005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.003 filename1: (groupid=0, jobs=1): err= 0: pid=89511: Sat Jul 13 03:17:00 2024 00:27:55.003 read: IOPS=178, BW=712KiB/s (729kB/s)(7148KiB/10034msec) 00:27:55.003 slat (usec): min=7, max=8057, avg=26.65, stdev=237.13 00:27:55.003 clat (msec): min=29, max=153, avg=89.52, stdev=21.36 00:27:55.003 lat (msec): min=29, max=153, avg=89.55, stdev=21.36 00:27:55.003 clat percentiles (msec): 00:27:55.003 | 1.00th=[ 40], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 70], 00:27:55.003 | 30.00th=[ 83], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 94], 00:27:55.003 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 118], 
95.00th=[ 130], 00:27:55.003 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:27:55.003 | 99.99th=[ 155] 00:27:55.003 bw ( KiB/s): min= 600, max= 1010, per=3.90%, avg=710.70, stdev=96.11, samples=20 00:27:55.003 iops : min= 150, max= 252, avg=177.65, stdev=23.95, samples=20 00:27:55.003 lat (msec) : 50=2.57%, 100=74.26%, 250=23.17% 00:27:55.003 cpu : usr=40.00%, sys=2.34%, ctx=1270, majf=0, minf=1075 00:27:55.003 IO depths : 1=0.1%, 2=3.5%, 4=13.8%, 8=68.7%, 16=13.9%, 32=0.0%, >=64=0.0% 00:27:55.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 complete : 0=0.0%, 4=90.9%, 8=6.1%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.003 issued rwts: total=1787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.003 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.004 filename1: (groupid=0, jobs=1): err= 0: pid=89512: Sat Jul 13 03:17:00 2024 00:27:55.004 read: IOPS=182, BW=728KiB/s (746kB/s)(7332KiB/10065msec) 00:27:55.004 slat (usec): min=4, max=8034, avg=27.58, stdev=280.92 00:27:55.004 clat (msec): min=8, max=156, avg=87.56, stdev=25.24 00:27:55.004 lat (msec): min=8, max=156, avg=87.58, stdev=25.24 00:27:55.004 clat percentiles (msec): 00:27:55.004 | 1.00th=[ 28], 5.00th=[ 39], 10.00th=[ 61], 20.00th=[ 63], 00:27:55.004 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 95], 00:27:55.004 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 132], 00:27:55.004 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:27:55.004 | 99.99th=[ 157] 00:27:55.004 bw ( KiB/s): min= 512, max= 1272, per=3.99%, avg=726.15, stdev=149.05, samples=20 00:27:55.004 iops : min= 128, max= 318, avg=181.50, stdev=37.25, samples=20 00:27:55.004 lat (msec) : 10=0.76%, 20=0.11%, 50=5.73%, 100=70.59%, 250=22.80% 00:27:55.004 cpu : usr=32.75%, sys=1.86%, ctx=992, majf=0, minf=1075 00:27:55.004 IO depths : 1=0.1%, 2=2.7%, 4=11.0%, 8=71.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:27:55.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 complete : 0=0.0%, 4=90.1%, 8=7.4%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 issued rwts: total=1833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.004 filename2: (groupid=0, jobs=1): err= 0: pid=89513: Sat Jul 13 03:17:00 2024 00:27:55.004 read: IOPS=198, BW=796KiB/s (815kB/s)(7980KiB/10031msec) 00:27:55.004 slat (usec): min=5, max=4036, avg=23.53, stdev=148.68 00:27:55.004 clat (msec): min=31, max=143, avg=80.28, stdev=20.45 00:27:55.004 lat (msec): min=31, max=143, avg=80.30, stdev=20.45 00:27:55.004 clat percentiles (msec): 00:27:55.004 | 1.00th=[ 42], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 61], 00:27:55.004 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 84], 60.00th=[ 88], 00:27:55.004 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 118], 00:27:55.004 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:27:55.004 | 99.99th=[ 144] 00:27:55.004 bw ( KiB/s): min= 664, max= 896, per=4.36%, avg=793.00, stdev=54.47, samples=20 00:27:55.004 iops : min= 166, max= 224, avg=198.25, stdev=13.62, samples=20 00:27:55.004 lat (msec) : 50=4.01%, 100=82.26%, 250=13.73% 00:27:55.004 cpu : usr=40.72%, sys=2.14%, ctx=1278, majf=0, minf=1075 00:27:55.004 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:55.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:55.004 issued rwts: total=1995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.004 filename2: (groupid=0, jobs=1): err= 0: pid=89514: Sat Jul 13 03:17:00 2024 00:27:55.004 read: IOPS=186, BW=746KiB/s (764kB/s)(7484KiB/10027msec) 00:27:55.004 slat (usec): min=5, max=8032, avg=34.09, stdev=317.23 00:27:55.004 clat (msec): min=30, max=140, avg=85.45, stdev=19.77 00:27:55.004 lat (msec): min=30, max=140, avg=85.49, stdev=19.78 00:27:55.004 clat percentiles (msec): 00:27:55.004 | 1.00th=[ 41], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 66], 00:27:55.004 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 90], 00:27:55.004 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 111], 95.00th=[ 121], 00:27:55.004 | 99.00th=[ 138], 99.50th=[ 138], 99.90th=[ 142], 99.95th=[ 142], 00:27:55.004 | 99.99th=[ 142] 00:27:55.004 bw ( KiB/s): min= 640, max= 824, per=4.07%, avg=742.00, stdev=61.63, samples=20 00:27:55.004 iops : min= 160, max= 206, avg=185.50, stdev=15.41, samples=20 00:27:55.004 lat (msec) : 50=2.08%, 100=82.20%, 250=15.71% 00:27:55.004 cpu : usr=44.05%, sys=2.57%, ctx=1307, majf=0, minf=1075 00:27:55.004 IO depths : 1=0.1%, 2=2.5%, 4=9.8%, 8=73.3%, 16=14.4%, 32=0.0%, >=64=0.0% 00:27:55.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 complete : 0=0.0%, 4=89.5%, 8=8.3%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 issued rwts: total=1871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.004 filename2: (groupid=0, jobs=1): err= 0: pid=89515: Sat Jul 13 03:17:00 2024 00:27:55.004 read: IOPS=175, BW=704KiB/s (720kB/s)(7076KiB/10058msec) 00:27:55.004 slat (usec): min=8, max=8035, avg=30.12, stdev=330.02 00:27:55.004 clat (msec): min=8, max=155, avg=90.63, stdev=24.47 00:27:55.004 lat (msec): min=8, max=155, avg=90.66, stdev=24.47 00:27:55.004 clat percentiles (msec): 00:27:55.004 | 1.00th=[ 28], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 71], 00:27:55.004 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 93], 60.00th=[ 96], 00:27:55.004 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 131], 00:27:55.004 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:27:55.004 | 99.99th=[ 157] 00:27:55.004 bw ( KiB/s): min= 528, max= 1168, per=3.85%, avg=701.20, stdev=135.85, samples=20 00:27:55.004 iops : min= 132, max= 292, avg=175.30, stdev=33.96, samples=20 00:27:55.004 lat (msec) : 10=0.79%, 20=0.11%, 50=4.92%, 100=64.39%, 250=29.79% 00:27:55.004 cpu : usr=31.78%, sys=1.77%, ctx=956, majf=0, minf=1072 00:27:55.004 IO depths : 1=0.1%, 2=3.3%, 4=13.4%, 8=68.9%, 16=14.4%, 32=0.0%, >=64=0.0% 00:27:55.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 complete : 0=0.0%, 4=91.1%, 8=6.0%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 issued rwts: total=1769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.004 filename2: (groupid=0, jobs=1): err= 0: pid=89516: Sat Jul 13 03:17:00 2024 00:27:55.004 read: IOPS=168, BW=673KiB/s (690kB/s)(6752KiB/10027msec) 00:27:55.004 slat (usec): min=5, max=8033, avg=23.09, stdev=218.32 00:27:55.004 clat (msec): min=32, max=180, avg=94.81, stdev=21.61 00:27:55.004 lat (msec): min=32, max=180, avg=94.83, stdev=21.62 00:27:55.004 clat percentiles (msec): 00:27:55.004 | 1.00th=[ 41], 5.00th=[ 61], 10.00th=[ 65], 20.00th=[ 84], 00:27:55.004 | 30.00th=[ 88], 40.00th=[ 89], 
50.00th=[ 95], 60.00th=[ 96], 00:27:55.004 | 70.00th=[ 103], 80.00th=[ 112], 90.00th=[ 124], 95.00th=[ 133], 00:27:55.004 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 180], 99.95th=[ 180], 00:27:55.004 | 99.99th=[ 180] 00:27:55.004 bw ( KiB/s): min= 507, max= 824, per=3.69%, avg=671.10, stdev=95.10, samples=20 00:27:55.004 iops : min= 126, max= 206, avg=167.70, stdev=23.91, samples=20 00:27:55.004 lat (msec) : 50=1.90%, 100=67.77%, 250=30.33% 00:27:55.004 cpu : usr=40.11%, sys=2.15%, ctx=1202, majf=0, minf=1072 00:27:55.004 IO depths : 1=0.1%, 2=4.6%, 4=18.2%, 8=63.9%, 16=13.3%, 32=0.0%, >=64=0.0% 00:27:55.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 complete : 0=0.0%, 4=92.2%, 8=3.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 issued rwts: total=1688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.004 filename2: (groupid=0, jobs=1): err= 0: pid=89517: Sat Jul 13 03:17:00 2024 00:27:55.004 read: IOPS=198, BW=796KiB/s (815kB/s)(8000KiB/10052msec) 00:27:55.004 slat (usec): min=5, max=8034, avg=29.65, stdev=310.31 00:27:55.004 clat (msec): min=23, max=144, avg=80.15, stdev=21.41 00:27:55.004 lat (msec): min=23, max=144, avg=80.18, stdev=21.42 00:27:55.004 clat percentiles (msec): 00:27:55.004 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 61], 00:27:55.004 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 87], 00:27:55.004 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 120], 00:27:55.004 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:27:55.004 | 99.99th=[ 144] 00:27:55.004 bw ( KiB/s): min= 664, max= 1040, per=4.38%, avg=796.40, stdev=78.49, samples=20 00:27:55.004 iops : min= 166, max= 260, avg=199.10, stdev=19.62, samples=20 00:27:55.004 lat (msec) : 50=6.95%, 100=80.10%, 250=12.95% 00:27:55.004 cpu : usr=35.13%, sys=2.27%, ctx=974, majf=0, minf=1073 00:27:55.004 IO depths : 1=0.1%, 2=0.1%, 4=0.7%, 8=83.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:27:55.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 issued rwts: total=2000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.004 filename2: (groupid=0, jobs=1): err= 0: pid=89518: Sat Jul 13 03:17:00 2024 00:27:55.004 read: IOPS=205, BW=821KiB/s (841kB/s)(8272KiB/10075msec) 00:27:55.004 slat (usec): min=5, max=10036, avg=32.11, stdev=297.00 00:27:55.004 clat (msec): min=2, max=150, avg=77.74, stdev=27.96 00:27:55.004 lat (msec): min=2, max=150, avg=77.77, stdev=27.97 00:27:55.004 clat percentiles (msec): 00:27:55.004 | 1.00th=[ 3], 5.00th=[ 18], 10.00th=[ 39], 20.00th=[ 59], 00:27:55.004 | 30.00th=[ 65], 40.00th=[ 75], 50.00th=[ 86], 60.00th=[ 89], 00:27:55.004 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 121], 00:27:55.004 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:27:55.004 | 99.99th=[ 150] 00:27:55.004 bw ( KiB/s): min= 640, max= 2112, per=4.51%, avg=820.55, stdev=310.56, samples=20 00:27:55.004 iops : min= 160, max= 528, avg=205.10, stdev=77.66, samples=20 00:27:55.004 lat (msec) : 4=1.55%, 10=1.55%, 20=1.98%, 50=7.25%, 100=73.36% 00:27:55.004 lat (msec) : 250=14.31% 00:27:55.004 cpu : usr=38.17%, sys=2.24%, ctx=1746, majf=0, minf=1073 00:27:55.004 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=81.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:27:55.004 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 issued rwts: total=2068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.004 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.004 filename2: (groupid=0, jobs=1): err= 0: pid=89519: Sat Jul 13 03:17:00 2024 00:27:55.004 read: IOPS=173, BW=695KiB/s (712kB/s)(6984KiB/10046msec) 00:27:55.004 slat (usec): min=5, max=9287, avg=28.36, stdev=303.05 00:27:55.004 clat (msec): min=36, max=172, avg=91.73, stdev=22.23 00:27:55.004 lat (msec): min=36, max=172, avg=91.76, stdev=22.24 00:27:55.004 clat percentiles (msec): 00:27:55.004 | 1.00th=[ 41], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 70], 00:27:55.004 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 91], 60.00th=[ 95], 00:27:55.004 | 70.00th=[ 101], 80.00th=[ 111], 90.00th=[ 123], 95.00th=[ 128], 00:27:55.004 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 174], 00:27:55.004 | 99.99th=[ 174] 00:27:55.004 bw ( KiB/s): min= 512, max= 880, per=3.82%, avg=694.40, stdev=101.05, samples=20 00:27:55.004 iops : min= 128, max= 220, avg=173.60, stdev=25.26, samples=20 00:27:55.004 lat (msec) : 50=2.00%, 100=67.93%, 250=30.07% 00:27:55.004 cpu : usr=34.51%, sys=1.98%, ctx=1518, majf=0, minf=1075 00:27:55.004 IO depths : 1=0.1%, 2=3.5%, 4=13.9%, 8=68.6%, 16=13.9%, 32=0.0%, >=64=0.0% 00:27:55.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.004 complete : 0=0.0%, 4=90.9%, 8=6.0%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.005 issued rwts: total=1746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.005 filename2: (groupid=0, jobs=1): err= 0: pid=89520: Sat Jul 13 03:17:00 2024 00:27:55.005 read: IOPS=195, BW=780KiB/s (799kB/s)(7828KiB/10031msec) 00:27:55.005 slat (usec): min=4, max=8037, avg=29.07, stdev=313.76 00:27:55.005 clat (msec): min=35, max=142, avg=81.81, stdev=21.51 00:27:55.005 lat (msec): min=35, max=142, avg=81.84, stdev=21.51 00:27:55.005 clat percentiles (msec): 00:27:55.005 | 1.00th=[ 37], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 61], 00:27:55.005 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 88], 00:27:55.005 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:27:55.005 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:27:55.005 | 99.99th=[ 144] 00:27:55.005 bw ( KiB/s): min= 664, max= 872, per=4.28%, avg=778.45, stdev=50.34, samples=20 00:27:55.005 iops : min= 166, max= 218, avg=194.55, stdev=12.63, samples=20 00:27:55.005 lat (msec) : 50=5.37%, 100=79.25%, 250=15.38% 00:27:55.005 cpu : usr=31.12%, sys=2.22%, ctx=935, majf=0, minf=1073 00:27:55.005 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=82.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:27:55.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.005 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.005 issued rwts: total=1957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:55.005 00:27:55.005 Run status group 0 (all jobs): 00:27:55.005 READ: bw=17.8MiB/s (18.6MB/s), 653KiB/s-853KiB/s (669kB/s-873kB/s), io=179MiB (188MB), run=10006-10075msec 00:27:55.264 ----------------------------------------------------- 00:27:55.264 Suppressions used: 00:27:55.264 count bytes template 00:27:55.264 45 402 /usr/src/fio/parse.c 00:27:55.264 1 8 libtcmalloc_minimal.so 00:27:55.264 
1 904 libcrypto.so 00:27:55.264 ----------------------------------------------------- 00:27:55.264 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.264 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.265 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.265 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:55.265 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.265 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.265 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.265 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.265 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:55.265 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:55.265 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:55.265 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.265 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- 
# set +x 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.526 bdev_null0 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.526 [2024-07-13 03:17:01.805333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:55.526 03:17:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.526 bdev_null1 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.526 { 00:27:55.526 "params": { 00:27:55.526 "name": "Nvme$subsystem", 00:27:55.526 "trtype": "$TEST_TRANSPORT", 00:27:55.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.526 "adrfam": "ipv4", 00:27:55.526 "trsvcid": "$NVMF_PORT", 00:27:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.526 "hdgst": ${hdgst:-false}, 00:27:55.526 "ddgst": ${ddgst:-false} 00:27:55.526 }, 00:27:55.526 "method": "bdev_nvme_attach_controller" 00:27:55.526 } 00:27:55.526 EOF 00:27:55.526 )") 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@54 -- # local file 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.526 { 00:27:55.526 "params": { 00:27:55.526 "name": "Nvme$subsystem", 00:27:55.526 "trtype": "$TEST_TRANSPORT", 00:27:55.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.526 "adrfam": "ipv4", 00:27:55.526 "trsvcid": "$NVMF_PORT", 00:27:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.526 "hdgst": ${hdgst:-false}, 00:27:55.526 "ddgst": ${ddgst:-false} 00:27:55.526 }, 00:27:55.526 "method": "bdev_nvme_attach_controller" 00:27:55.526 } 00:27:55.526 EOF 00:27:55.526 )") 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:55.526 03:17:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:55.526 "params": { 00:27:55.526 "name": "Nvme0", 00:27:55.526 "trtype": "tcp", 00:27:55.526 "traddr": "10.0.0.2", 00:27:55.526 "adrfam": "ipv4", 00:27:55.526 "trsvcid": "4420", 00:27:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:55.526 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:55.526 "hdgst": false, 00:27:55.526 "ddgst": false 00:27:55.526 }, 00:27:55.526 "method": "bdev_nvme_attach_controller" 00:27:55.526 },{ 00:27:55.526 "params": { 00:27:55.526 "name": "Nvme1", 00:27:55.527 "trtype": "tcp", 00:27:55.527 "traddr": "10.0.0.2", 00:27:55.527 "adrfam": "ipv4", 00:27:55.527 "trsvcid": "4420", 00:27:55.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:55.527 "hdgst": false, 00:27:55.527 "ddgst": false 00:27:55.527 }, 00:27:55.527 "method": "bdev_nvme_attach_controller" 00:27:55.527 }' 00:27:55.527 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:55.527 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:55.527 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:27:55.527 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:55.527 03:17:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.786 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:55.786 ... 00:27:55.786 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:55.786 ... 
00:27:55.786 fio-3.35 00:27:55.786 Starting 4 threads 00:28:02.345 00:28:02.345 filename0: (groupid=0, jobs=1): err= 0: pid=89655: Sat Jul 13 03:17:08 2024 00:28:02.345 read: IOPS=1585, BW=12.4MiB/s (13.0MB/s)(61.9MiB/5001msec) 00:28:02.345 slat (nsec): min=9453, max=78721, avg=20258.40, stdev=6593.17 00:28:02.345 clat (usec): min=1592, max=9480, avg=4974.89, stdev=771.11 00:28:02.345 lat (usec): min=1608, max=9504, avg=4995.14, stdev=770.34 00:28:02.345 clat percentiles (usec): 00:28:02.345 | 1.00th=[ 2507], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4293], 00:28:02.345 | 30.00th=[ 4686], 40.00th=[ 4817], 50.00th=[ 5080], 60.00th=[ 5145], 00:28:02.345 | 70.00th=[ 5211], 80.00th=[ 5473], 90.00th=[ 5866], 95.00th=[ 6325], 00:28:02.345 | 99.00th=[ 6783], 99.50th=[ 6849], 99.90th=[ 9110], 99.95th=[ 9110], 00:28:02.345 | 99.99th=[ 9503] 00:28:02.345 bw ( KiB/s): min=11904, max=13392, per=24.24%, avg=12676.67, stdev=634.71, samples=9 00:28:02.345 iops : min= 1488, max= 1674, avg=1584.56, stdev=79.32, samples=9 00:28:02.345 lat (msec) : 2=0.30%, 4=3.54%, 10=96.15% 00:28:02.345 cpu : usr=91.72%, sys=7.14%, ctx=20, majf=0, minf=1073 00:28:02.345 IO depths : 1=0.1%, 2=14.8%, 4=58.3%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.345 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.345 issued rwts: total=7929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.345 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:02.345 filename0: (groupid=0, jobs=1): err= 0: pid=89656: Sat Jul 13 03:17:08 2024 00:28:02.345 read: IOPS=1587, BW=12.4MiB/s (13.0MB/s)(62.1MiB/5004msec) 00:28:02.345 slat (nsec): min=8989, max=74416, avg=17229.92, stdev=6341.69 00:28:02.345 clat (usec): min=1630, max=9159, avg=4980.62, stdev=762.05 00:28:02.345 lat (usec): min=1641, max=9190, avg=4997.85, stdev=762.67 00:28:02.345 clat percentiles (usec): 00:28:02.345 | 1.00th=[ 2474], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4359], 00:28:02.345 | 30.00th=[ 4686], 40.00th=[ 4817], 50.00th=[ 5080], 60.00th=[ 5145], 00:28:02.345 | 70.00th=[ 5211], 80.00th=[ 5473], 90.00th=[ 5932], 95.00th=[ 6325], 00:28:02.345 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 7046], 99.95th=[ 7635], 00:28:02.345 | 99.99th=[ 9110] 00:28:02.345 bw ( KiB/s): min=11904, max=13344, per=24.25%, avg=12684.44, stdev=646.31, samples=9 00:28:02.345 iops : min= 1488, max= 1668, avg=1585.56, stdev=80.79, samples=9 00:28:02.345 lat (msec) : 2=0.11%, 4=3.60%, 10=96.29% 00:28:02.345 cpu : usr=92.62%, sys=6.54%, ctx=7, majf=0, minf=1063 00:28:02.345 IO depths : 1=0.1%, 2=14.8%, 4=58.3%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.345 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.345 issued rwts: total=7943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.345 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:02.345 filename1: (groupid=0, jobs=1): err= 0: pid=89657: Sat Jul 13 03:17:08 2024 00:28:02.345 read: IOPS=1586, BW=12.4MiB/s (13.0MB/s)(62.0MiB/5004msec) 00:28:02.345 slat (nsec): min=5445, max=63943, avg=20001.90, stdev=5783.05 00:28:02.345 clat (usec): min=1599, max=9259, avg=4975.37, stdev=760.55 00:28:02.345 lat (usec): min=1615, max=9286, avg=4995.38, stdev=760.41 00:28:02.345 clat percentiles (usec): 00:28:02.345 | 1.00th=[ 2507], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4293], 00:28:02.345 | 30.00th=[ 4686], 
40.00th=[ 4883], 50.00th=[ 5080], 60.00th=[ 5145], 00:28:02.345 | 70.00th=[ 5211], 80.00th=[ 5473], 90.00th=[ 5866], 95.00th=[ 6325], 00:28:02.345 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 7177], 99.95th=[ 7439], 00:28:02.345 | 99.99th=[ 9241] 00:28:02.346 bw ( KiB/s): min=11904, max=13344, per=24.24%, avg=12676.67, stdev=639.28, samples=9 00:28:02.346 iops : min= 1488, max= 1668, avg=1584.56, stdev=79.89, samples=9 00:28:02.346 lat (msec) : 2=0.23%, 4=3.52%, 10=96.26% 00:28:02.346 cpu : usr=92.18%, sys=6.88%, ctx=6, majf=0, minf=1075 00:28:02.346 IO depths : 1=0.1%, 2=14.8%, 4=58.3%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.346 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.346 issued rwts: total=7937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.346 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:02.346 filename1: (groupid=0, jobs=1): err= 0: pid=89658: Sat Jul 13 03:17:08 2024 00:28:02.346 read: IOPS=1780, BW=13.9MiB/s (14.6MB/s)(69.6MiB/5002msec) 00:28:02.346 slat (nsec): min=5392, max=76280, avg=17111.89, stdev=6395.33 00:28:02.346 clat (usec): min=925, max=10779, avg=4442.85, stdev=1311.80 00:28:02.346 lat (usec): min=935, max=10801, avg=4459.96, stdev=1312.20 00:28:02.346 clat percentiles (usec): 00:28:02.346 | 1.00th=[ 1696], 5.00th=[ 1729], 10.00th=[ 1827], 20.00th=[ 3884], 00:28:02.346 | 30.00th=[ 4080], 40.00th=[ 4359], 50.00th=[ 4686], 60.00th=[ 4817], 00:28:02.346 | 70.00th=[ 5014], 80.00th=[ 5211], 90.00th=[ 6063], 95.00th=[ 6390], 00:28:02.346 | 99.00th=[ 7504], 99.50th=[ 7701], 99.90th=[ 8979], 99.95th=[10552], 00:28:02.346 | 99.99th=[10814] 00:28:02.346 bw ( KiB/s): min=11888, max=16720, per=27.55%, avg=14412.44, stdev=1813.97, samples=9 00:28:02.346 iops : min= 1486, max= 2090, avg=1801.56, stdev=226.75, samples=9 00:28:02.346 lat (usec) : 1000=0.18% 00:28:02.346 lat (msec) : 2=12.61%, 4=8.58%, 10=78.54%, 20=0.09% 00:28:02.346 cpu : usr=92.10%, sys=6.90%, ctx=6, majf=0, minf=1074 00:28:02.346 IO depths : 1=0.1%, 2=5.5%, 4=62.8%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.346 complete : 0=0.0%, 4=97.9%, 8=2.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.346 issued rwts: total=8907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.346 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:02.346 00:28:02.346 Run status group 0 (all jobs): 00:28:02.346 READ: bw=51.1MiB/s (53.6MB/s), 12.4MiB/s-13.9MiB/s (13.0MB/s-14.6MB/s), io=256MiB (268MB), run=5001-5004msec 00:28:02.911 ----------------------------------------------------- 00:28:02.911 Suppressions used: 00:28:02.911 count bytes template 00:28:02.911 6 52 /usr/src/fio/parse.c 00:28:02.911 1 8 libtcmalloc_minimal.so 00:28:02.911 1 904 libcrypto.so 00:28:02.911 ----------------------------------------------------- 00:28:02.911 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.911 00:28:02.911 real 0m27.892s 00:28:02.911 user 2m7.364s 00:28:02.911 sys 0m8.853s 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:02.911 ************************************ 00:28:02.911 END TEST fio_dif_rand_params 00:28:02.911 03:17:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.911 ************************************ 00:28:02.911 03:17:09 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:02.911 03:17:09 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:02.911 03:17:09 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:02.911 03:17:09 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.911 03:17:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:02.911 ************************************ 00:28:02.911 START TEST fio_dif_digest 00:28:02.911 ************************************ 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- 
target/dif.sh@127 -- # iodepth=3 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:02.911 bdev_null0 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:02.911 [2024-07-13 03:17:09.361918] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.911 03:17:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.912 
03:17:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.912 { 00:28:02.912 "params": { 00:28:02.912 "name": "Nvme$subsystem", 00:28:02.912 "trtype": "$TEST_TRANSPORT", 00:28:02.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.912 "adrfam": "ipv4", 00:28:02.912 "trsvcid": "$NVMF_PORT", 00:28:02.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.912 "hdgst": ${hdgst:-false}, 00:28:02.912 "ddgst": ${ddgst:-false} 00:28:02.912 }, 00:28:02.912 "method": "bdev_nvme_attach_controller" 00:28:02.912 } 00:28:02.912 EOF 00:28:02.912 )") 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:02.912 "params": { 00:28:02.912 "name": "Nvme0", 00:28:02.912 "trtype": "tcp", 00:28:02.912 "traddr": "10.0.0.2", 00:28:02.912 "adrfam": "ipv4", 00:28:02.912 "trsvcid": "4420", 00:28:02.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.912 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:02.912 "hdgst": true, 00:28:02.912 "ddgst": true 00:28:02.912 }, 00:28:02.912 "method": "bdev_nvme_attach_controller" 00:28:02.912 }' 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:02.912 03:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.169 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:03.169 ... 00:28:03.169 fio-3.35 00:28:03.169 Starting 3 threads 00:28:15.398 00:28:15.398 filename0: (groupid=0, jobs=1): err= 0: pid=89764: Sat Jul 13 03:17:20 2024 00:28:15.398 read: IOPS=175, BW=21.9MiB/s (23.0MB/s)(219MiB/10003msec) 00:28:15.398 slat (nsec): min=5436, max=63873, avg=20949.24, stdev=7618.83 00:28:15.398 clat (usec): min=14386, max=24950, avg=17080.47, stdev=864.12 00:28:15.398 lat (usec): min=14401, max=24988, avg=17101.42, stdev=864.74 00:28:15.398 clat percentiles (usec): 00:28:15.398 | 1.00th=[14746], 5.00th=[15926], 10.00th=[16188], 20.00th=[16450], 00:28:15.398 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:28:15.398 | 70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[18482], 00:28:15.398 | 99.00th=[19006], 99.50th=[19006], 99.90th=[25035], 99.95th=[25035], 00:28:15.398 | 99.99th=[25035] 00:28:15.398 bw ( KiB/s): min=21504, max=23040, per=33.16%, avg=22312.42, stdev=477.13, samples=19 00:28:15.398 iops : min= 168, max= 180, avg=174.32, stdev= 3.73, samples=19 00:28:15.398 lat (msec) : 20=99.83%, 50=0.17% 00:28:15.398 cpu : usr=91.93%, sys=7.43%, ctx=9, majf=0, minf=1075 00:28:15.398 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.398 issued rwts: total=1752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:15.398 filename0: (groupid=0, jobs=1): err= 0: pid=89765: Sat Jul 13 03:17:20 2024 00:28:15.398 read: IOPS=175, BW=21.9MiB/s (23.0MB/s)(219MiB/10002msec) 00:28:15.398 slat (nsec): min=5483, max=67792, avg=21553.85, stdev=7997.90 00:28:15.398 clat (usec): min=14372, max=23826, avg=17076.58, stdev=847.32 00:28:15.398 lat (usec): min=14387, max=23850, avg=17098.14, stdev=847.86 00:28:15.398 clat percentiles (usec): 00:28:15.398 | 1.00th=[14746], 5.00th=[15926], 10.00th=[16188], 20.00th=[16450], 00:28:15.398 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:28:15.398 | 
70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[18482], 00:28:15.398 | 99.00th=[19006], 99.50th=[19006], 99.90th=[23725], 99.95th=[23725], 00:28:15.398 | 99.99th=[23725] 00:28:15.398 bw ( KiB/s): min=21504, max=23040, per=33.16%, avg=22314.74, stdev=477.03, samples=19 00:28:15.398 iops : min= 168, max= 180, avg=174.32, stdev= 3.73, samples=19 00:28:15.398 lat (msec) : 20=99.83%, 50=0.17% 00:28:15.398 cpu : usr=92.60%, sys=6.75%, ctx=27, majf=0, minf=1074 00:28:15.398 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.398 issued rwts: total=1752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:15.398 filename0: (groupid=0, jobs=1): err= 0: pid=89766: Sat Jul 13 03:17:20 2024 00:28:15.398 read: IOPS=175, BW=21.9MiB/s (23.0MB/s)(219MiB/10004msec) 00:28:15.398 slat (nsec): min=4583, max=66720, avg=21674.58, stdev=7983.84 00:28:15.398 clat (usec): min=5579, max=19688, avg=17049.31, stdev=934.84 00:28:15.398 lat (usec): min=5588, max=19708, avg=17070.99, stdev=935.44 00:28:15.398 clat percentiles (usec): 00:28:15.398 | 1.00th=[14615], 5.00th=[15926], 10.00th=[16188], 20.00th=[16450], 00:28:15.398 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 60.00th=[17171], 00:28:15.398 | 70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[18482], 00:28:15.398 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19792], 99.95th=[19792], 00:28:15.398 | 99.99th=[19792] 00:28:15.398 bw ( KiB/s): min=21504, max=23808, per=33.16%, avg=22312.42, stdev=598.94, samples=19 00:28:15.398 iops : min= 168, max= 186, avg=174.32, stdev= 4.68, samples=19 00:28:15.398 lat (msec) : 10=0.17%, 20=99.83% 00:28:15.398 cpu : usr=92.28%, sys=7.07%, ctx=15, majf=0, minf=1073 00:28:15.398 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.398 issued rwts: total=1755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:15.398 00:28:15.398 Run status group 0 (all jobs): 00:28:15.398 READ: bw=65.7MiB/s (68.9MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=657MiB (689MB), run=10002-10004msec 00:28:15.398 ----------------------------------------------------- 00:28:15.398 Suppressions used: 00:28:15.398 count bytes template 00:28:15.398 5 44 /usr/src/fio/parse.c 00:28:15.398 1 8 libtcmalloc_minimal.so 00:28:15.398 1 904 libcrypto.so 00:28:15.398 ----------------------------------------------------- 00:28:15.398 00:28:15.398 03:17:21 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:15.398 03:17:21 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:15.398 03:17:21 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:15.398 03:17:21 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:15.398 03:17:21 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:15.398 03:17:21 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:15.398 03:17:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.398 03:17:21 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.398 03:17:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.398 03:17:21 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:15.398 03:17:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.398 03:17:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.398 03:17:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.398 00:28:15.398 real 0m12.314s 00:28:15.398 user 0m29.569s 00:28:15.398 sys 0m2.487s 00:28:15.399 03:17:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:15.399 03:17:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.399 ************************************ 00:28:15.399 END TEST fio_dif_digest 00:28:15.399 ************************************ 00:28:15.399 03:17:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:15.399 03:17:21 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:15.399 03:17:21 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:15.399 03:17:21 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:15.399 03:17:21 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:15.399 03:17:21 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:15.399 03:17:21 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:15.399 03:17:21 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:15.399 03:17:21 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:15.399 rmmod nvme_tcp 00:28:15.399 rmmod nvme_fabrics 00:28:15.399 rmmod nvme_keyring 00:28:15.399 03:17:21 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:15.399 03:17:21 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:15.399 03:17:21 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:15.399 03:17:21 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 89011 ']' 00:28:15.399 03:17:21 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 89011 00:28:15.399 03:17:21 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 89011 ']' 00:28:15.399 03:17:21 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 89011 00:28:15.399 03:17:21 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:28:15.399 03:17:21 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:15.399 03:17:21 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89011 00:28:15.399 03:17:21 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:15.399 03:17:21 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:15.399 03:17:21 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89011' 00:28:15.399 killing process with pid 89011 00:28:15.399 03:17:21 nvmf_dif -- common/autotest_common.sh@967 -- # kill 89011 00:28:15.399 03:17:21 nvmf_dif -- common/autotest_common.sh@972 -- # wait 89011 00:28:16.336 03:17:22 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:16.336 03:17:22 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:16.594 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:16.853 Waiting for block devices as requested 00:28:16.853 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:16.853 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:16.853 03:17:23 
nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:16.853 03:17:23 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:16.853 03:17:23 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:16.853 03:17:23 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:16.853 03:17:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.853 03:17:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:16.853 03:17:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.853 03:17:23 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:16.853 00:28:16.853 real 1m9.008s 00:28:16.853 user 4m5.729s 00:28:16.853 sys 0m19.642s 00:28:16.853 03:17:23 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:16.853 03:17:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:16.853 ************************************ 00:28:16.853 END TEST nvmf_dif 00:28:16.853 ************************************ 00:28:17.112 03:17:23 -- common/autotest_common.sh@1142 -- # return 0 00:28:17.112 03:17:23 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:17.112 03:17:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:17.112 03:17:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:17.112 03:17:23 -- common/autotest_common.sh@10 -- # set +x 00:28:17.112 ************************************ 00:28:17.112 START TEST nvmf_abort_qd_sizes 00:28:17.112 ************************************ 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:17.112 * Looking for test storage... 
00:28:17.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.112 03:17:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:17.113 03:17:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:17.113 Cannot find device "nvmf_tgt_br" 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:17.113 Cannot find device "nvmf_tgt_br2" 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:17.113 Cannot find device "nvmf_tgt_br" 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:17.113 Cannot find device "nvmf_tgt_br2" 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:28:17.113 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:17.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:17.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:17.372 03:17:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:17.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:28:17.372 00:28:17.372 --- 10.0.0.2 ping statistics --- 00:28:17.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.372 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:17.372 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:17.372 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:28:17.372 00:28:17.372 --- 10.0.0.3 ping statistics --- 00:28:17.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.372 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:17.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:28:17.372 00:28:17.372 --- 10.0.0.1 ping statistics --- 00:28:17.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.372 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:17.372 03:17:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:18.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:18.308 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:18.308 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=90375 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 90375 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 90375 ']' 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:18.309 03:17:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:18.566 [2024-07-13 03:17:24.834746] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:18.566 [2024-07-13 03:17:24.834949] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.566 [2024-07-13 03:17:25.011450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:18.825 [2024-07-13 03:17:25.246561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.825 [2024-07-13 03:17:25.246659] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.825 [2024-07-13 03:17:25.246680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.825 [2024-07-13 03:17:25.246699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.825 [2024-07-13 03:17:25.246716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.825 [2024-07-13 03:17:25.246970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.825 [2024-07-13 03:17:25.247256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:18.825 [2024-07-13 03:17:25.247466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.825 [2024-07-13 03:17:25.247474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.085 [2024-07-13 03:17:25.428528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:28:19.350 03:17:25 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
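The scripts/common.sh trace above is the userspace NVMe enumeration: class 1, subclass 8, prog-if 2 are formatted as the PCI class code 0108 with prog-if 02, lspci output is filtered on them, and the result is the two controllers 0000:00:10.0 and 0000:00:11.0. A condensed sketch of that logic follows; the function name is made up for illustration, and the real helper additionally applies the pci_can_use allow/deny checks and the uname/driver handling seen in the trace.

    #!/usr/bin/env bash
    # Sketch of the userspace NVMe enumeration traced above: NVMe controllers
    # are PCI class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe),
    # so the helper greps machine-readable lspci output for that class code
    # and prints the matching BDFs.
    list_nvme_bdfs() {
      local class subclass progif
      class=$(printf '%02x' 1)      # 01 = mass storage controller
      subclass=$(printf '%02x' 8)   # 08 = non-volatile memory controller
      progif=$(printf '%02x' 2)     # 02 = NVM Express programming interface

      # -mm: machine-readable, -n: numeric IDs, -D: always print the PCI domain.
      lspci -mm -n -D \
        | grep -i -- "-p${progif}" \
        | awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'
    }

    list_nvme_bdfs   # on this VM: 0000:00:10.0 and 0000:00:11.0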
00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.350 03:17:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:19.350 ************************************ 00:28:19.350 START TEST spdk_target_abort 00:28:19.350 ************************************ 00:28:19.350 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:28:19.350 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:19.350 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:19.350 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.350 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.614 spdk_targetn1 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.614 [2024-07-13 03:17:25.918024] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.614 [2024-07-13 03:17:25.955421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.614 03:17:25 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:19.614 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:19.615 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:19.615 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:19.615 03:17:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:22.896 Initializing NVMe Controllers 00:28:22.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:22.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:22.896 Initialization complete. Launching workers. 
00:28:22.896 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8784, failed: 0 00:28:22.896 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1057, failed to submit 7727 00:28:22.896 success 783, unsuccess 274, failed 0 00:28:22.896 03:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:22.896 03:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.106 Initializing NVMe Controllers 00:28:27.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:27.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:27.106 Initialization complete. Launching workers. 00:28:27.106 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8880, failed: 0 00:28:27.106 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1162, failed to submit 7718 00:28:27.106 success 388, unsuccess 774, failed 0 00:28:27.106 03:17:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:27.106 03:17:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:29.667 Initializing NVMe Controllers 00:28:29.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:29.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:29.667 Initialization complete. Launching workers. 
00:28:29.667 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27831, failed: 0 00:28:29.667 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2250, failed to submit 25581 00:28:29.667 success 364, unsuccess 1886, failed 0 00:28:29.667 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:29.667 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.667 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:29.667 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.667 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:29.667 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.667 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:29.925 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.925 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 90375 00:28:29.925 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 90375 ']' 00:28:29.925 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 90375 00:28:29.925 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:28:29.925 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:29.925 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90375 00:28:29.925 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:29.925 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:29.925 killing process with pid 90375 00:28:29.925 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90375' 00:28:29.925 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 90375 00:28:29.925 03:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 90375 00:28:30.860 00:28:30.860 real 0m11.466s 00:28:30.861 user 0m44.393s 00:28:30.861 sys 0m2.518s 00:28:30.861 03:17:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:30.861 03:17:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:30.861 ************************************ 00:28:30.861 END TEST spdk_target_abort 00:28:30.861 ************************************ 00:28:30.861 03:17:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:30.861 03:17:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:30.861 03:17:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:30.861 03:17:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.861 03:17:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:31.119 
************************************ 00:28:31.119 START TEST kernel_target_abort 00:28:31.119 ************************************ 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:31.119 03:17:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:31.377 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:31.377 Waiting for block devices as requested 00:28:31.377 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:31.636 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:31.895 No valid GPT data, bailing 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:28:31.895 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:32.155 No valid GPT data, bailing 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:32.155 No valid GPT data, bailing 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:32.155 No valid GPT data, bailing 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 --hostid=f622eda1-fcfe-4e16-bc81-0757da055208 -a 10.0.0.1 -t tcp -s 4420 00:28:32.155 00:28:32.155 Discovery Log Number of Records 2, Generation counter 2 00:28:32.155 =====Discovery Log Entry 0====== 00:28:32.155 trtype: tcp 00:28:32.155 adrfam: ipv4 00:28:32.155 subtype: current discovery subsystem 00:28:32.155 treq: not specified, sq flow control disable supported 00:28:32.155 portid: 1 00:28:32.155 trsvcid: 4420 00:28:32.155 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:32.155 traddr: 10.0.0.1 00:28:32.155 eflags: none 00:28:32.155 sectype: none 00:28:32.155 =====Discovery Log Entry 1====== 00:28:32.155 trtype: tcp 00:28:32.155 adrfam: ipv4 00:28:32.155 subtype: nvme subsystem 00:28:32.155 treq: not specified, sq flow control disable supported 00:28:32.155 portid: 1 00:28:32.155 trsvcid: 4420 00:28:32.155 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:32.155 traddr: 10.0.0.1 00:28:32.155 eflags: none 00:28:32.155 sectype: none 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:32.155 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:32.414 03:17:38 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:32.414 03:17:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:35.704 Initializing NVMe Controllers 00:28:35.704 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:35.704 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:35.704 Initialization complete. Launching workers. 00:28:35.704 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24280, failed: 0 00:28:35.704 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24280, failed to submit 0 00:28:35.704 success 0, unsuccess 24280, failed 0 00:28:35.704 03:17:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:35.704 03:17:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:38.988 Initializing NVMe Controllers 00:28:38.988 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:38.988 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:38.988 Initialization complete. Launching workers. 
00:28:38.988 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57165, failed: 0 00:28:38.988 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24420, failed to submit 32745 00:28:38.988 success 0, unsuccess 24420, failed 0 00:28:38.988 03:17:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:38.988 03:17:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:42.306 Initializing NVMe Controllers 00:28:42.306 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:42.306 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:42.306 Initialization complete. Launching workers. 00:28:42.306 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59169, failed: 0 00:28:42.306 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14776, failed to submit 44393 00:28:42.306 success 0, unsuccess 14776, failed 0 00:28:42.306 03:17:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:42.307 03:17:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:42.307 03:17:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:28:42.307 03:17:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:42.307 03:17:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:42.307 03:17:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:42.307 03:17:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:42.307 03:17:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:42.307 03:17:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:42.307 03:17:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:42.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:43.436 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:43.436 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:43.694 00:28:43.694 real 0m12.615s 00:28:43.694 user 0m6.557s 00:28:43.694 sys 0m3.740s 00:28:43.694 03:17:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:43.694 03:17:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:43.694 ************************************ 00:28:43.694 END TEST kernel_target_abort 00:28:43.694 ************************************ 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:43.694 
03:17:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:43.694 rmmod nvme_tcp 00:28:43.694 rmmod nvme_fabrics 00:28:43.694 rmmod nvme_keyring 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 90375 ']' 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 90375 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 90375 ']' 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 90375 00:28:43.694 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (90375) - No such process 00:28:43.694 Process with pid 90375 is not found 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 90375 is not found' 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:43.694 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:43.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:44.211 Waiting for block devices as requested 00:28:44.211 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:44.211 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:44.211 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:44.211 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:44.211 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:44.211 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:44.211 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.211 03:17:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:44.211 03:17:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.469 03:17:50 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:44.469 00:28:44.469 real 0m27.328s 00:28:44.469 user 0m52.146s 00:28:44.469 sys 0m7.560s 00:28:44.469 03:17:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:44.469 03:17:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:44.469 ************************************ 00:28:44.469 END TEST nvmf_abort_qd_sizes 00:28:44.469 ************************************ 00:28:44.469 03:17:50 -- common/autotest_common.sh@1142 -- # return 0 00:28:44.469 03:17:50 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:44.469 03:17:50 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:28:44.469 03:17:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:44.469 03:17:50 -- common/autotest_common.sh@10 -- # set +x 00:28:44.469 ************************************ 00:28:44.469 START TEST keyring_file 00:28:44.469 ************************************ 00:28:44.469 03:17:50 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:44.469 * Looking for test storage... 00:28:44.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:44.469 03:17:50 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:44.469 03:17:50 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.469 03:17:50 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.469 03:17:50 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.469 03:17:50 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.469 03:17:50 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.469 03:17:50 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.469 03:17:50 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:44.469 03:17:50 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:44.469 03:17:50 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:44.469 03:17:50 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:44.469 03:17:50 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:44.469 03:17:50 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:44.469 03:17:50 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:44.469 03:17:50 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Z6snd5cRzx 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:44.469 03:17:50 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Z6snd5cRzx 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Z6snd5cRzx 00:28:44.469 03:17:50 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Z6snd5cRzx 00:28:44.469 03:17:50 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:44.469 03:17:50 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:44.470 03:17:50 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:44.470 03:17:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:44.470 03:17:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:44.470 03:17:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HmFWYZNyuu 00:28:44.470 03:17:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:44.470 03:17:50 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:44.470 03:17:50 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:44.470 03:17:50 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:44.470 03:17:50 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:44.470 03:17:50 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:44.470 03:17:50 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:44.727 03:17:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HmFWYZNyuu 00:28:44.727 03:17:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HmFWYZNyuu 00:28:44.727 03:17:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.HmFWYZNyuu 00:28:44.727 03:17:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=91371 00:28:44.727 03:17:51 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:44.727 03:17:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 91371 00:28:44.727 03:17:51 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 91371 ']' 00:28:44.727 03:17:51 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.727 03:17:51 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:44.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.727 03:17:51 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.727 03:17:51 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:44.727 03:17:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:44.727 [2024-07-13 03:17:51.142755] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:44.727 [2024-07-13 03:17:51.142955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91371 ] 00:28:44.985 [2024-07-13 03:17:51.316725] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.243 [2024-07-13 03:17:51.550618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.500 [2024-07-13 03:17:51.744946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:46.067 03:17:52 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:46.067 03:17:52 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:46.067 03:17:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:46.067 03:17:52 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.067 03:17:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:46.067 [2024-07-13 03:17:52.354344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.067 null0 00:28:46.067 [2024-07-13 03:17:52.386327] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:46.068 [2024-07-13 03:17:52.386817] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:46.068 [2024-07-13 03:17:52.394283] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.068 03:17:52 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:46.068 [2024-07-13 03:17:52.406305] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:46.068 request: 00:28:46.068 { 00:28:46.068 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:46.068 "secure_channel": false, 00:28:46.068 "listen_address": { 00:28:46.068 "trtype": "tcp", 00:28:46.068 "traddr": "127.0.0.1", 00:28:46.068 "trsvcid": "4420" 00:28:46.068 }, 00:28:46.068 "method": "nvmf_subsystem_add_listener", 00:28:46.068 "req_id": 1 00:28:46.068 } 00:28:46.068 Got JSON-RPC error response 00:28:46.068 response: 00:28:46.068 { 00:28:46.068 "code": -32602, 00:28:46.068 "message": "Invalid parameters" 00:28:46.068 } 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:46.068 03:17:52 keyring_file -- keyring/file.sh@46 -- # bperfpid=91394 00:28:46.068 03:17:52 keyring_file -- keyring/file.sh@48 -- # waitforlisten 91394 /var/tmp/bperf.sock 00:28:46.068 03:17:52 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 91394 ']' 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:46.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:46.068 03:17:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:46.068 [2024-07-13 03:17:52.522198] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:46.068 [2024-07-13 03:17:52.522434] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91394 ] 00:28:46.327 [2024-07-13 03:17:52.697303] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.584 [2024-07-13 03:17:52.922492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.842 [2024-07-13 03:17:53.109301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:47.100 03:17:53 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:47.100 03:17:53 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:47.100 03:17:53 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Z6snd5cRzx 00:28:47.100 03:17:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Z6snd5cRzx 00:28:47.358 03:17:53 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HmFWYZNyuu 00:28:47.358 03:17:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HmFWYZNyuu 00:28:47.616 03:17:53 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:47.616 03:17:53 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:47.616 03:17:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:47.616 03:17:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:47.616 03:17:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:47.874 03:17:54 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Z6snd5cRzx == 
\/\t\m\p\/\t\m\p\.\Z\6\s\n\d\5\c\R\z\x ]] 00:28:47.874 03:17:54 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:28:47.874 03:17:54 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:47.874 03:17:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:47.874 03:17:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:47.874 03:17:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:48.132 03:17:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.HmFWYZNyuu == \/\t\m\p\/\t\m\p\.\H\m\F\W\Y\Z\N\y\u\u ]] 00:28:48.132 03:17:54 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:48.132 03:17:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:48.132 03:17:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:48.132 03:17:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:48.132 03:17:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:48.132 03:17:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:48.389 03:17:54 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:48.389 03:17:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:48.389 03:17:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:48.389 03:17:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:48.389 03:17:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:48.389 03:17:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:48.389 03:17:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:48.647 03:17:55 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:48.648 03:17:55 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:48.648 03:17:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:48.906 [2024-07-13 03:17:55.296674] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:48.906 nvme0n1 00:28:49.174 03:17:55 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:49.174 03:17:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:49.174 03:17:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:49.174 03:17:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:49.174 03:17:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:49.174 03:17:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:49.432 03:17:55 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:49.432 03:17:55 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:49.432 03:17:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:49.432 03:17:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:49.432 03:17:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:28:49.432 03:17:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:49.432 03:17:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:49.690 03:17:55 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:49.690 03:17:55 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:49.690 Running I/O for 1 seconds... 00:28:50.624 00:28:50.624 Latency(us) 00:28:50.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.624 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:50.624 nvme0n1 : 1.01 8399.65 32.81 0.00 0.00 15158.78 9711.24 30265.72 00:28:50.624 =================================================================================================================== 00:28:50.624 Total : 8399.65 32.81 0.00 0.00 15158.78 9711.24 30265.72 00:28:50.624 0 00:28:50.624 03:17:57 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:50.624 03:17:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:50.883 03:17:57 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:50.883 03:17:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:50.883 03:17:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:50.883 03:17:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:50.883 03:17:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:50.883 03:17:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:51.450 03:17:57 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:51.450 03:17:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:51.450 03:17:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:51.450 03:17:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:51.450 03:17:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:51.450 03:17:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:51.450 03:17:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:51.450 03:17:57 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:51.450 03:17:57 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:51.450 03:17:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:51.450 03:17:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:51.450 03:17:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:51.450 03:17:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.450 03:17:57 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:51.450 03:17:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:28:51.450 03:17:57 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:51.450 03:17:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:51.709 [2024-07-13 03:17:58.074810] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:51.709 [2024-07-13 03:17:58.075201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (107): Transport endpoint is not connected 00:28:51.709 [2024-07-13 03:17:58.076120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (9): Bad file descriptor 00:28:51.709 [2024-07-13 03:17:58.077116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:51.709 [2024-07-13 03:17:58.077153] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:51.709 [2024-07-13 03:17:58.077169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:51.709 request: 00:28:51.709 { 00:28:51.709 "name": "nvme0", 00:28:51.709 "trtype": "tcp", 00:28:51.709 "traddr": "127.0.0.1", 00:28:51.709 "adrfam": "ipv4", 00:28:51.709 "trsvcid": "4420", 00:28:51.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.709 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:51.709 "prchk_reftag": false, 00:28:51.709 "prchk_guard": false, 00:28:51.709 "hdgst": false, 00:28:51.709 "ddgst": false, 00:28:51.709 "psk": "key1", 00:28:51.709 "method": "bdev_nvme_attach_controller", 00:28:51.709 "req_id": 1 00:28:51.709 } 00:28:51.709 Got JSON-RPC error response 00:28:51.709 response: 00:28:51.709 { 00:28:51.709 "code": -5, 00:28:51.709 "message": "Input/output error" 00:28:51.709 } 00:28:51.709 03:17:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:51.709 03:17:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:51.709 03:17:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:51.709 03:17:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:51.709 03:17:58 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:51.709 03:17:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:51.709 03:17:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:51.709 03:17:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:51.709 03:17:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:51.709 03:17:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:51.967 03:17:58 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:51.968 03:17:58 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:51.968 03:17:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:51.968 03:17:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:51.968 03:17:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:51.968 03:17:58 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:51.968 03:17:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:52.226 03:17:58 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:52.226 03:17:58 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:52.226 03:17:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:52.485 03:17:58 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:52.485 03:17:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:52.744 03:17:59 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:52.744 03:17:59 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:52.744 03:17:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.002 03:17:59 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:53.002 03:17:59 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Z6snd5cRzx 00:28:53.002 03:17:59 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Z6snd5cRzx 00:28:53.002 03:17:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:53.002 03:17:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Z6snd5cRzx 00:28:53.002 03:17:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:53.002 03:17:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:53.002 03:17:59 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:53.002 03:17:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:53.002 03:17:59 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Z6snd5cRzx 00:28:53.002 03:17:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Z6snd5cRzx 00:28:53.261 [2024-07-13 03:17:59.659344] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Z6snd5cRzx': 0100660 00:28:53.261 [2024-07-13 03:17:59.659418] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:53.261 request: 00:28:53.261 { 00:28:53.261 "name": "key0", 00:28:53.261 "path": "/tmp/tmp.Z6snd5cRzx", 00:28:53.261 "method": "keyring_file_add_key", 00:28:53.261 "req_id": 1 00:28:53.261 } 00:28:53.261 Got JSON-RPC error response 00:28:53.261 response: 00:28:53.261 { 00:28:53.261 "code": -1, 00:28:53.261 "message": "Operation not permitted" 00:28:53.261 } 00:28:53.261 03:17:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:53.261 03:17:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:53.261 03:17:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:53.261 03:17:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:53.261 03:17:59 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Z6snd5cRzx 00:28:53.261 03:17:59 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Z6snd5cRzx 00:28:53.261 03:17:59 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Z6snd5cRzx 00:28:53.520 03:17:59 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Z6snd5cRzx 00:28:53.520 03:17:59 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:53.520 03:17:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:53.520 03:17:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:53.520 03:17:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:53.520 03:17:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:53.520 03:17:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.778 03:18:00 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:53.778 03:18:00 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:53.778 03:18:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:53.778 03:18:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:53.778 03:18:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:53.778 03:18:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:53.778 03:18:00 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:53.778 03:18:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:53.778 03:18:00 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:53.778 03:18:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:54.037 [2024-07-13 03:18:00.487881] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Z6snd5cRzx': No such file or directory 00:28:54.037 [2024-07-13 03:18:00.488001] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:54.037 [2024-07-13 03:18:00.488044] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:54.037 [2024-07-13 03:18:00.488060] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:54.037 [2024-07-13 03:18:00.488075] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:54.037 request: 00:28:54.037 { 00:28:54.037 "name": "nvme0", 00:28:54.037 "trtype": "tcp", 00:28:54.037 "traddr": "127.0.0.1", 00:28:54.037 "adrfam": "ipv4", 00:28:54.037 "trsvcid": "4420", 00:28:54.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:54.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:54.037 "prchk_reftag": false, 00:28:54.037 "prchk_guard": false, 00:28:54.037 "hdgst": false, 00:28:54.037 "ddgst": false, 00:28:54.037 "psk": "key0", 00:28:54.037 "method": "bdev_nvme_attach_controller", 00:28:54.037 "req_id": 1 00:28:54.037 } 00:28:54.037 
Got JSON-RPC error response 00:28:54.037 response: 00:28:54.037 { 00:28:54.037 "code": -19, 00:28:54.037 "message": "No such device" 00:28:54.037 } 00:28:54.037 03:18:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:54.037 03:18:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:54.037 03:18:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:54.037 03:18:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:54.037 03:18:00 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:54.037 03:18:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:54.295 03:18:00 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:54.295 03:18:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:54.295 03:18:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:54.295 03:18:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:54.295 03:18:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:54.295 03:18:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:54.295 03:18:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4JR2MVczvn 00:28:54.295 03:18:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:54.295 03:18:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:54.295 03:18:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:54.295 03:18:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:54.295 03:18:00 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:54.295 03:18:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:54.295 03:18:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:54.553 03:18:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4JR2MVczvn 00:28:54.553 03:18:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4JR2MVczvn 00:28:54.553 03:18:00 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.4JR2MVczvn 00:28:54.553 03:18:00 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4JR2MVczvn 00:28:54.553 03:18:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4JR2MVczvn 00:28:54.553 03:18:01 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:54.553 03:18:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:55.120 nvme0n1 00:28:55.120 03:18:01 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:55.120 03:18:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:55.120 03:18:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:55.120 03:18:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:55.120 03:18:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:28:55.120 03:18:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:55.380 03:18:01 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:55.380 03:18:01 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:55.380 03:18:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:55.380 03:18:01 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:55.638 03:18:01 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:55.638 03:18:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:55.638 03:18:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:55.638 03:18:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:55.638 03:18:02 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:55.638 03:18:02 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:55.638 03:18:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:55.638 03:18:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:55.638 03:18:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:55.638 03:18:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:55.638 03:18:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:55.896 03:18:02 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:55.896 03:18:02 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:55.896 03:18:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:56.153 03:18:02 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:56.153 03:18:02 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:56.153 03:18:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.412 03:18:02 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:56.412 03:18:02 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4JR2MVczvn 00:28:56.412 03:18:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4JR2MVczvn 00:28:56.670 03:18:03 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HmFWYZNyuu 00:28:56.670 03:18:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HmFWYZNyuu 00:28:56.927 03:18:03 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:56.927 03:18:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:57.185 nvme0n1 00:28:57.185 03:18:03 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:57.185 03:18:03 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:57.443 03:18:03 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:57.443 "subsystems": [ 00:28:57.443 { 00:28:57.443 "subsystem": "keyring", 00:28:57.443 "config": [ 00:28:57.443 { 00:28:57.443 "method": "keyring_file_add_key", 00:28:57.443 "params": { 00:28:57.443 "name": "key0", 00:28:57.443 "path": "/tmp/tmp.4JR2MVczvn" 00:28:57.443 } 00:28:57.443 }, 00:28:57.443 { 00:28:57.443 "method": "keyring_file_add_key", 00:28:57.443 "params": { 00:28:57.443 "name": "key1", 00:28:57.443 "path": "/tmp/tmp.HmFWYZNyuu" 00:28:57.443 } 00:28:57.443 } 00:28:57.443 ] 00:28:57.443 }, 00:28:57.443 { 00:28:57.443 "subsystem": "iobuf", 00:28:57.443 "config": [ 00:28:57.443 { 00:28:57.443 "method": "iobuf_set_options", 00:28:57.443 "params": { 00:28:57.443 "small_pool_count": 8192, 00:28:57.443 "large_pool_count": 1024, 00:28:57.443 "small_bufsize": 8192, 00:28:57.443 "large_bufsize": 135168 00:28:57.443 } 00:28:57.443 } 00:28:57.443 ] 00:28:57.443 }, 00:28:57.443 { 00:28:57.443 "subsystem": "sock", 00:28:57.443 "config": [ 00:28:57.443 { 00:28:57.443 "method": "sock_set_default_impl", 00:28:57.443 "params": { 00:28:57.443 "impl_name": "uring" 00:28:57.443 } 00:28:57.443 }, 00:28:57.443 { 00:28:57.443 "method": "sock_impl_set_options", 00:28:57.443 "params": { 00:28:57.443 "impl_name": "ssl", 00:28:57.443 "recv_buf_size": 4096, 00:28:57.443 "send_buf_size": 4096, 00:28:57.443 "enable_recv_pipe": true, 00:28:57.443 "enable_quickack": false, 00:28:57.443 "enable_placement_id": 0, 00:28:57.443 "enable_zerocopy_send_server": true, 00:28:57.443 "enable_zerocopy_send_client": false, 00:28:57.443 "zerocopy_threshold": 0, 00:28:57.443 "tls_version": 0, 00:28:57.443 "enable_ktls": false 00:28:57.443 } 00:28:57.443 }, 00:28:57.443 { 00:28:57.443 "method": "sock_impl_set_options", 00:28:57.443 "params": { 00:28:57.443 "impl_name": "posix", 00:28:57.443 "recv_buf_size": 2097152, 00:28:57.443 "send_buf_size": 2097152, 00:28:57.443 "enable_recv_pipe": true, 00:28:57.443 "enable_quickack": false, 00:28:57.443 "enable_placement_id": 0, 00:28:57.443 "enable_zerocopy_send_server": true, 00:28:57.443 "enable_zerocopy_send_client": false, 00:28:57.443 "zerocopy_threshold": 0, 00:28:57.443 "tls_version": 0, 00:28:57.443 "enable_ktls": false 00:28:57.443 } 00:28:57.443 }, 00:28:57.443 { 00:28:57.443 "method": "sock_impl_set_options", 00:28:57.443 "params": { 00:28:57.443 "impl_name": "uring", 00:28:57.443 "recv_buf_size": 2097152, 00:28:57.443 "send_buf_size": 2097152, 00:28:57.443 "enable_recv_pipe": true, 00:28:57.443 "enable_quickack": false, 00:28:57.443 "enable_placement_id": 0, 00:28:57.443 "enable_zerocopy_send_server": false, 00:28:57.443 "enable_zerocopy_send_client": false, 00:28:57.443 "zerocopy_threshold": 0, 00:28:57.443 "tls_version": 0, 00:28:57.443 "enable_ktls": false 00:28:57.443 } 00:28:57.443 } 00:28:57.443 ] 00:28:57.443 }, 00:28:57.443 { 00:28:57.443 "subsystem": "vmd", 00:28:57.443 "config": [] 00:28:57.443 }, 00:28:57.443 { 00:28:57.443 "subsystem": "accel", 00:28:57.443 "config": [ 00:28:57.443 { 00:28:57.443 "method": "accel_set_options", 00:28:57.443 "params": { 00:28:57.443 "small_cache_size": 128, 00:28:57.443 "large_cache_size": 16, 00:28:57.443 "task_count": 2048, 00:28:57.443 "sequence_count": 2048, 00:28:57.443 "buf_count": 2048 00:28:57.443 } 00:28:57.443 } 00:28:57.443 ] 00:28:57.443 }, 00:28:57.443 { 00:28:57.443 "subsystem": "bdev", 00:28:57.443 "config": [ 00:28:57.443 { 
00:28:57.443 "method": "bdev_set_options", 00:28:57.443 "params": { 00:28:57.443 "bdev_io_pool_size": 65535, 00:28:57.443 "bdev_io_cache_size": 256, 00:28:57.443 "bdev_auto_examine": true, 00:28:57.443 "iobuf_small_cache_size": 128, 00:28:57.443 "iobuf_large_cache_size": 16 00:28:57.443 } 00:28:57.443 }, 00:28:57.443 { 00:28:57.443 "method": "bdev_raid_set_options", 00:28:57.443 "params": { 00:28:57.443 "process_window_size_kb": 1024 00:28:57.443 } 00:28:57.443 }, 00:28:57.443 { 00:28:57.443 "method": "bdev_iscsi_set_options", 00:28:57.443 "params": { 00:28:57.443 "timeout_sec": 30 00:28:57.444 } 00:28:57.444 }, 00:28:57.444 { 00:28:57.444 "method": "bdev_nvme_set_options", 00:28:57.444 "params": { 00:28:57.444 "action_on_timeout": "none", 00:28:57.444 "timeout_us": 0, 00:28:57.444 "timeout_admin_us": 0, 00:28:57.444 "keep_alive_timeout_ms": 10000, 00:28:57.444 "arbitration_burst": 0, 00:28:57.444 "low_priority_weight": 0, 00:28:57.444 "medium_priority_weight": 0, 00:28:57.444 "high_priority_weight": 0, 00:28:57.444 "nvme_adminq_poll_period_us": 10000, 00:28:57.444 "nvme_ioq_poll_period_us": 0, 00:28:57.444 "io_queue_requests": 512, 00:28:57.444 "delay_cmd_submit": true, 00:28:57.444 "transport_retry_count": 4, 00:28:57.444 "bdev_retry_count": 3, 00:28:57.444 "transport_ack_timeout": 0, 00:28:57.444 "ctrlr_loss_timeout_sec": 0, 00:28:57.444 "reconnect_delay_sec": 0, 00:28:57.444 "fast_io_fail_timeout_sec": 0, 00:28:57.444 "disable_auto_failback": false, 00:28:57.444 "generate_uuids": false, 00:28:57.444 "transport_tos": 0, 00:28:57.444 "nvme_error_stat": false, 00:28:57.444 "rdma_srq_size": 0, 00:28:57.444 "io_path_stat": false, 00:28:57.444 "allow_accel_sequence": false, 00:28:57.444 "rdma_max_cq_size": 0, 00:28:57.444 "rdma_cm_event_timeout_ms": 0, 00:28:57.444 "dhchap_digests": [ 00:28:57.444 "sha256", 00:28:57.444 "sha384", 00:28:57.444 "sha512" 00:28:57.444 ], 00:28:57.444 "dhchap_dhgroups": [ 00:28:57.444 "null", 00:28:57.444 "ffdhe2048", 00:28:57.444 "ffdhe3072", 00:28:57.444 "ffdhe4096", 00:28:57.444 "ffdhe6144", 00:28:57.444 "ffdhe8192" 00:28:57.444 ] 00:28:57.444 } 00:28:57.444 }, 00:28:57.444 { 00:28:57.444 "method": "bdev_nvme_attach_controller", 00:28:57.444 "params": { 00:28:57.444 "name": "nvme0", 00:28:57.444 "trtype": "TCP", 00:28:57.444 "adrfam": "IPv4", 00:28:57.444 "traddr": "127.0.0.1", 00:28:57.444 "trsvcid": "4420", 00:28:57.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:57.444 "prchk_reftag": false, 00:28:57.444 "prchk_guard": false, 00:28:57.444 "ctrlr_loss_timeout_sec": 0, 00:28:57.444 "reconnect_delay_sec": 0, 00:28:57.444 "fast_io_fail_timeout_sec": 0, 00:28:57.444 "psk": "key0", 00:28:57.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:57.444 "hdgst": false, 00:28:57.444 "ddgst": false 00:28:57.444 } 00:28:57.444 }, 00:28:57.444 { 00:28:57.444 "method": "bdev_nvme_set_hotplug", 00:28:57.444 "params": { 00:28:57.444 "period_us": 100000, 00:28:57.444 "enable": false 00:28:57.444 } 00:28:57.444 }, 00:28:57.444 { 00:28:57.444 "method": "bdev_wait_for_examine" 00:28:57.444 } 00:28:57.444 ] 00:28:57.444 }, 00:28:57.444 { 00:28:57.444 "subsystem": "nbd", 00:28:57.444 "config": [] 00:28:57.444 } 00:28:57.444 ] 00:28:57.444 }' 00:28:57.444 03:18:03 keyring_file -- keyring/file.sh@114 -- # killprocess 91394 00:28:57.444 03:18:03 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 91394 ']' 00:28:57.444 03:18:03 keyring_file -- common/autotest_common.sh@952 -- # kill -0 91394 00:28:57.444 03:18:03 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:28:57.444 03:18:03 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:57.444 03:18:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91394 00:28:57.444 killing process with pid 91394 00:28:57.444 Received shutdown signal, test time was about 1.000000 seconds 00:28:57.444 00:28:57.444 Latency(us) 00:28:57.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.444 =================================================================================================================== 00:28:57.444 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:57.444 03:18:03 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:57.444 03:18:03 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:57.444 03:18:03 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91394' 00:28:57.444 03:18:03 keyring_file -- common/autotest_common.sh@967 -- # kill 91394 00:28:57.444 03:18:03 keyring_file -- common/autotest_common.sh@972 -- # wait 91394 00:28:58.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:58.459 03:18:04 keyring_file -- keyring/file.sh@117 -- # bperfpid=91646 00:28:58.459 03:18:04 keyring_file -- keyring/file.sh@119 -- # waitforlisten 91646 /var/tmp/bperf.sock 00:28:58.459 03:18:04 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 91646 ']' 00:28:58.459 03:18:04 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.459 03:18:04 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:58.459 "subsystems": [ 00:28:58.459 { 00:28:58.459 "subsystem": "keyring", 00:28:58.460 "config": [ 00:28:58.460 { 00:28:58.460 "method": "keyring_file_add_key", 00:28:58.460 "params": { 00:28:58.460 "name": "key0", 00:28:58.460 "path": "/tmp/tmp.4JR2MVczvn" 00:28:58.460 } 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "method": "keyring_file_add_key", 00:28:58.460 "params": { 00:28:58.460 "name": "key1", 00:28:58.460 "path": "/tmp/tmp.HmFWYZNyuu" 00:28:58.460 } 00:28:58.460 } 00:28:58.460 ] 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "subsystem": "iobuf", 00:28:58.460 "config": [ 00:28:58.460 { 00:28:58.460 "method": "iobuf_set_options", 00:28:58.460 "params": { 00:28:58.460 "small_pool_count": 8192, 00:28:58.460 "large_pool_count": 1024, 00:28:58.460 "small_bufsize": 8192, 00:28:58.460 "large_bufsize": 135168 00:28:58.460 } 00:28:58.460 } 00:28:58.460 ] 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "subsystem": "sock", 00:28:58.460 "config": [ 00:28:58.460 { 00:28:58.460 "method": "sock_set_default_impl", 00:28:58.460 "params": { 00:28:58.460 "impl_name": "uring" 00:28:58.460 } 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "method": "sock_impl_set_options", 00:28:58.460 "params": { 00:28:58.460 "impl_name": "ssl", 00:28:58.460 "recv_buf_size": 4096, 00:28:58.460 "send_buf_size": 4096, 00:28:58.460 "enable_recv_pipe": true, 00:28:58.460 "enable_quickack": false, 00:28:58.460 "enable_placement_id": 0, 00:28:58.460 "enable_zerocopy_send_server": true, 00:28:58.460 "enable_zerocopy_send_client": false, 00:28:58.460 "zerocopy_threshold": 0, 00:28:58.460 "tls_version": 0, 00:28:58.460 "enable_ktls": false 00:28:58.460 } 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "method": "sock_impl_set_options", 00:28:58.460 "params": { 00:28:58.460 "impl_name": "posix", 00:28:58.460 "recv_buf_size": 2097152, 00:28:58.460 "send_buf_size": 2097152, 00:28:58.460 "enable_recv_pipe": true, 
00:28:58.460 "enable_quickack": false, 00:28:58.460 "enable_placement_id": 0, 00:28:58.460 "enable_zerocopy_send_server": true, 00:28:58.460 "enable_zerocopy_send_client": false, 00:28:58.460 "zerocopy_threshold": 0, 00:28:58.460 "tls_version": 0, 00:28:58.460 "enable_ktls": false 00:28:58.460 } 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "method": "sock_impl_set_options", 00:28:58.460 "params": { 00:28:58.460 "impl_name": "uring", 00:28:58.460 "recv_buf_size": 2097152, 00:28:58.460 "send_buf_size": 2097152, 00:28:58.460 "enable_recv_pipe": true, 00:28:58.460 "enable_quickack": false, 00:28:58.460 "enable_placement_id": 0, 00:28:58.460 "enable_zerocopy_send_server": false, 00:28:58.460 "enable_zerocopy_send_client": false, 00:28:58.460 "zerocopy_threshold": 0, 00:28:58.460 "tls_version": 0, 00:28:58.460 "enable_ktls": false 00:28:58.460 } 00:28:58.460 } 00:28:58.460 ] 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "subsystem": "vmd", 00:28:58.460 "config": [] 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "subsystem": "accel", 00:28:58.460 "config": [ 00:28:58.460 { 00:28:58.460 "method": "accel_set_options", 00:28:58.460 "params": { 00:28:58.460 "small_cache_size": 128, 00:28:58.460 "large_cache_size": 16, 00:28:58.460 "task_count": 2048, 00:28:58.460 "sequence_count": 2048, 00:28:58.460 "buf_count": 2048 00:28:58.460 } 00:28:58.460 } 00:28:58.460 ] 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "subsystem": "bdev", 00:28:58.460 "config": [ 00:28:58.460 { 00:28:58.460 "method": "bdev_set_options", 00:28:58.460 "params": { 00:28:58.460 "bdev_io_pool_size": 65535, 00:28:58.460 "bdev_io_cache_size": 256, 00:28:58.460 "bdev_auto_examine": true, 00:28:58.460 "iobuf_small_cache_size": 128, 00:28:58.460 "iobuf_large_cache_size": 16 00:28:58.460 } 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "method": "bdev_raid_set_options", 00:28:58.460 "params": { 00:28:58.460 "process_window_size_kb": 1024 00:28:58.460 } 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "method": "bdev_iscsi_set_options", 00:28:58.460 "params": { 00:28:58.460 "timeout_sec": 30 00:28:58.460 } 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "method": "bdev_nvme_set_options", 00:28:58.460 "params": { 00:28:58.460 "action_on_timeout": "none", 00:28:58.460 "timeout_us": 0, 00:28:58.460 "timeout_admin_us": 0, 00:28:58.460 "keep_alive_timeout_ms": 10000, 00:28:58.460 "arbitration_burst": 0, 00:28:58.460 "low_priority_weight": 0, 00:28:58.460 "medium_priority_weight": 0, 00:28:58.460 "high_priority_weight": 0, 00:28:58.460 "nvme_adminq_poll_period_us": 10000, 00:28:58.460 "nvme_ioq_poll_period_us": 0, 00:28:58.460 "io_queue_requests": 512, 00:28:58.460 "delay_cmd_submit": true, 00:28:58.460 "transport_retry_count": 4, 00:28:58.460 "bdev_retry_count": 3, 00:28:58.460 "transport_ack_timeout": 0, 00:28:58.460 "ctrlr_loss_timeout_sec": 0, 00:28:58.460 "reconnect_delay_sec": 0, 00:28:58.460 "fast_io_fail_timeout_sec": 0, 00:28:58.460 "disable_auto_failback": false, 00:28:58.460 "generate_uuids": false, 00:28:58.460 "transport_tos": 0, 00:28:58.460 "nvme_error_stat": false, 00:28:58.460 "rdma_srq_size": 0, 00:28:58.460 "io_path_stat": false, 00:28:58.460 "allow_accel_sequence": false, 00:28:58.460 "rdma_max_cq_size": 0, 00:28:58.460 "rdma_cm_event_timeout_ms": 0, 00:28:58.460 "dhchap_digests": [ 00:28:58.460 "sha256", 00:28:58.460 "sha384", 00:28:58.460 "sha512" 00:28:58.460 ], 00:28:58.460 "dhchap_dhgroups": [ 00:28:58.460 "null", 00:28:58.460 "ffdhe2048", 00:28:58.460 "ffdhe3072", 00:28:58.460 "ffdhe4096", 00:28:58.460 "ffdhe6144", 00:28:58.460 "ffdhe8192" 
00:28:58.460 ] 00:28:58.460 } 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "method": "bdev_nvme_attach_controller", 00:28:58.460 "params": { 00:28:58.460 "name": "nvme0", 00:28:58.460 "trtype": "TCP", 00:28:58.460 "adrfam": "IPv4", 00:28:58.460 "traddr": "127.0.0.1", 00:28:58.460 "trsvcid": "4420", 00:28:58.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:58.460 "prchk_reftag": false, 00:28:58.460 "prchk_guard": false, 00:28:58.460 "ctrlr_loss_timeout_sec": 0, 00:28:58.460 "reconnect_delay_sec": 0, 00:28:58.460 "fast_io_fail_timeout_sec": 0, 00:28:58.460 "psk": "key0", 00:28:58.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:58.460 "hdgst": false, 00:28:58.460 "ddgst": false 00:28:58.460 } 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "method": "bdev_nvme_set_hotplug", 00:28:58.460 "params": { 00:28:58.460 "period_us": 100000, 00:28:58.460 "enable": false 00:28:58.460 } 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "method": "bdev_wait_for_examine" 00:28:58.460 } 00:28:58.460 ] 00:28:58.460 }, 00:28:58.460 { 00:28:58.460 "subsystem": "nbd", 00:28:58.460 "config": [] 00:28:58.460 } 00:28:58.460 ] 00:28:58.460 }' 00:28:58.460 03:18:04 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:58.460 03:18:04 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:58.460 03:18:04 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:58.460 03:18:04 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:58.460 03:18:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:58.719 [2024-07-13 03:18:04.956296] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
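
The bdevperf command line recorded above reads its configuration from /dev/fd/63, and the echo of the JSON at keyring/file.sh@115 strongly suggests the test hands that JSON over through process substitution rather than a temporary file. A minimal sketch of the same pattern follows; the config is cut down to the keyring methods for brevity, whereas the real run passes the full subsystem dump shown above.

    # Start bdevperf against the bperf RPC socket, feeding it a JSON config on a
    # file descriptor created by process substitution (this is what appears as
    # /dev/fd/63 in the xtrace). Flags are copied from the log.
    config='{"subsystems": [{"subsystem": "keyring", "config": [
        {"method": "keyring_file_add_key", "params": {"name": "key0", "path": "/tmp/tmp.4JR2MVczvn"}},
        {"method": "keyring_file_add_key", "params": {"name": "key1", "path": "/tmp/tmp.HmFWYZNyuu"}}]}]}'
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")
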
00:28:58.719 [2024-07-13 03:18:04.956504] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91646 ] 00:28:58.719 [2024-07-13 03:18:05.123724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.977 [2024-07-13 03:18:05.334318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.236 [2024-07-13 03:18:05.579701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:59.236 [2024-07-13 03:18:05.688538] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:59.494 03:18:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.494 03:18:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:59.495 03:18:05 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:59.495 03:18:05 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:59.495 03:18:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.753 03:18:06 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:59.753 03:18:06 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:59.753 03:18:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:59.753 03:18:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:59.753 03:18:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.753 03:18:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.753 03:18:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:00.011 03:18:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:00.011 03:18:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:00.011 03:18:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:00.011 03:18:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:00.011 03:18:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:00.011 03:18:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:00.011 03:18:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.270 03:18:06 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:00.270 03:18:06 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:00.270 03:18:06 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:00.270 03:18:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:00.529 03:18:07 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:00.529 03:18:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:00.529 03:18:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.4JR2MVczvn /tmp/tmp.HmFWYZNyuu 00:29:00.529 03:18:07 keyring_file -- keyring/file.sh@20 -- # killprocess 91646 00:29:00.529 03:18:07 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 91646 ']' 00:29:00.529 03:18:07 keyring_file -- common/autotest_common.sh@952 -- # kill -0 91646 00:29:00.529 03:18:07 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:29:00.788 03:18:07 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:00.788 03:18:07 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91646 00:29:00.788 killing process with pid 91646 00:29:00.788 Received shutdown signal, test time was about 1.000000 seconds 00:29:00.788 00:29:00.788 Latency(us) 00:29:00.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.788 =================================================================================================================== 00:29:00.788 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:00.788 03:18:07 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:00.788 03:18:07 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:00.788 03:18:07 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91646' 00:29:00.788 03:18:07 keyring_file -- common/autotest_common.sh@967 -- # kill 91646 00:29:00.788 03:18:07 keyring_file -- common/autotest_common.sh@972 -- # wait 91646 00:29:01.750 03:18:08 keyring_file -- keyring/file.sh@21 -- # killprocess 91371 00:29:01.750 03:18:08 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 91371 ']' 00:29:01.750 03:18:08 keyring_file -- common/autotest_common.sh@952 -- # kill -0 91371 00:29:01.750 03:18:08 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:01.750 03:18:08 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:01.750 03:18:08 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91371 00:29:01.750 killing process with pid 91371 00:29:01.750 03:18:08 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:01.750 03:18:08 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:01.750 03:18:08 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91371' 00:29:01.750 03:18:08 keyring_file -- common/autotest_common.sh@967 -- # kill 91371 00:29:01.750 [2024-07-13 03:18:08.199690] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:01.750 03:18:08 keyring_file -- common/autotest_common.sh@972 -- # wait 91371 00:29:04.282 00:29:04.282 real 0m19.523s 00:29:04.282 user 0m44.663s 00:29:04.282 sys 0m3.164s 00:29:04.282 03:18:10 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:04.282 03:18:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:04.282 ************************************ 00:29:04.282 END TEST keyring_file 00:29:04.282 ************************************ 00:29:04.282 03:18:10 -- common/autotest_common.sh@1142 -- # return 0 00:29:04.282 03:18:10 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:29:04.282 03:18:10 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:04.282 03:18:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:04.282 03:18:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.282 03:18:10 -- common/autotest_common.sh@10 -- # set +x 00:29:04.283 ************************************ 00:29:04.283 START TEST keyring_linux 00:29:04.283 ************************************ 00:29:04.283 03:18:10 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:04.283 * Looking for test 
storage... 00:29:04.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:04.283 03:18:10 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f622eda1-fcfe-4e16-bc81-0757da055208 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=f622eda1-fcfe-4e16-bc81-0757da055208 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:04.283 03:18:10 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.283 03:18:10 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.283 03:18:10 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.283 03:18:10 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.283 03:18:10 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.283 03:18:10 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.283 03:18:10 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:04.283 03:18:10 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:04.283 03:18:10 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:04.283 03:18:10 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:04.283 03:18:10 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:04.283 03:18:10 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:04.283 03:18:10 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:04.283 03:18:10 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:04.283 03:18:10 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:04.283 /tmp/:spdk-test:key0 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:04.283 03:18:10 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:04.283 03:18:10 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:04.283 /tmp/:spdk-test:key1 00:29:04.283 03:18:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:04.283 03:18:10 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=91787 00:29:04.283 03:18:10 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:04.283 03:18:10 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 91787 00:29:04.283 03:18:10 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 91787 ']' 00:29:04.283 03:18:10 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.283 03:18:10 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:04.283 03:18:10 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.283 03:18:10 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:04.283 03:18:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:04.283 [2024-07-13 03:18:10.735791] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
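
Before the target comes up, prep_key has written the two TLS PSKs to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 in interchange form and restricted them to mode 0600. A rough sketch of how such a string can be produced is shown below; the base64-over-secret-plus-little-endian-CRC32 layout is an assumption inferred from the NVMeTLSkey-1:00:...: values printed later in the log, not something the log states explicitly.

    # Illustrative only: build an interchange-format PSK from the test's secret
    # and store it the way prep_key does (0600 so only the owner can read it).
    key='00112233445566778899aabbccddeeff'
    # The secret is used as ASCII bytes; append its CRC32 (little-endian, assumed) and base64 the result.
    psk=$(python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print("NVMeTLSkey-1:00:" + base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode() + ":")' "$key")
    echo "$psk" > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0
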
00:29:04.283 [2024-07-13 03:18:10.735992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91787 ] 00:29:04.543 [2024-07-13 03:18:10.912874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.802 [2024-07-13 03:18:11.146852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.060 [2024-07-13 03:18:11.326547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:05.628 03:18:11 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:05.628 03:18:11 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:05.628 03:18:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:05.628 03:18:11 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.628 03:18:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:05.628 [2024-07-13 03:18:11.856579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.628 null0 00:29:05.628 [2024-07-13 03:18:11.888555] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:05.628 [2024-07-13 03:18:11.888875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:05.628 03:18:11 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.628 03:18:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:05.628 274313053 00:29:05.628 03:18:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:05.628 124691238 00:29:05.628 03:18:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=91807 00:29:05.628 03:18:11 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:05.628 03:18:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 91807 /var/tmp/bperf.sock 00:29:05.628 03:18:11 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 91807 ']' 00:29:05.628 03:18:11 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:05.628 03:18:11 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:05.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:05.628 03:18:11 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:05.628 03:18:11 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:05.628 03:18:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:05.628 [2024-07-13 03:18:12.023810] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
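
keyring_linux differs from the keyring_file case above in that the PSKs live in the kernel session keyring rather than in files: the keyctl add user calls just recorded return the serial numbers 274313053 and 124691238 that the later checks compare against, and bdev_nvme_attach_controller later references them by name via --psk :spdk-test:key0. A hedged sketch of the same round trip outside the harness, reusing the key name and payload from the log:

    # Load a PSK into the session keyring (@s), resolve and inspect it the way
    # linux.sh's get_keysn/keyctl print checks do, then unlink it again.
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # prints the new key's serial
    keyctl search @s user :spdk-test:key0             # name -> serial lookup
    keyctl print "$sn"                                # dump the stored payload
    keyctl unlink "$sn"                               # detach it from the keyring
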
00:29:05.628 [2024-07-13 03:18:12.024005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91807 ] 00:29:05.887 [2024-07-13 03:18:12.195714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.145 [2024-07-13 03:18:12.422723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.711 03:18:13 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:06.711 03:18:13 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:06.711 03:18:13 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:06.711 03:18:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:06.969 03:18:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:06.969 03:18:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:07.227 [2024-07-13 03:18:13.718882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:07.485 03:18:13 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:07.485 03:18:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:07.743 [2024-07-13 03:18:14.052100] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:07.743 nvme0n1 00:29:07.743 03:18:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:07.743 03:18:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:07.743 03:18:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:07.743 03:18:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:07.743 03:18:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:07.743 03:18:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.000 03:18:14 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:08.000 03:18:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:08.000 03:18:14 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:08.000 03:18:14 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:08.000 03:18:14 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.000 03:18:14 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:08.000 03:18:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.258 03:18:14 keyring_linux -- keyring/linux.sh@25 -- # sn=274313053 00:29:08.258 03:18:14 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:08.258 03:18:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:08.258 
03:18:14 keyring_linux -- keyring/linux.sh@26 -- # [[ 274313053 == \2\7\4\3\1\3\0\5\3 ]] 00:29:08.258 03:18:14 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 274313053 00:29:08.258 03:18:14 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:08.258 03:18:14 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:08.516 Running I/O for 1 seconds... 00:29:09.449 00:29:09.449 Latency(us) 00:29:09.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.449 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:09.449 nvme0n1 : 1.01 9891.86 38.64 0.00 0.00 12834.02 4915.20 19660.80 00:29:09.449 =================================================================================================================== 00:29:09.449 Total : 9891.86 38.64 0.00 0.00 12834.02 4915.20 19660.80 00:29:09.449 0 00:29:09.449 03:18:15 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:09.449 03:18:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:09.707 03:18:16 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:09.707 03:18:16 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:09.707 03:18:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:09.707 03:18:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:09.707 03:18:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.707 03:18:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:09.966 03:18:16 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:09.966 03:18:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:09.966 03:18:16 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:09.966 03:18:16 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:09.966 03:18:16 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:29:09.966 03:18:16 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:09.966 03:18:16 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:09.966 03:18:16 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:09.966 03:18:16 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:09.966 03:18:16 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:09.966 03:18:16 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:09.966 03:18:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:10.224 [2024-07-13 03:18:16.606175] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:10.224 [2024-07-13 03:18:16.606577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (107): Transport endpoint is not connected 00:29:10.224 [2024-07-13 03:18:16.607552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (9): Bad file descriptor 00:29:10.224 [2024-07-13 03:18:16.608546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:10.224 [2024-07-13 03:18:16.608585] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:10.224 [2024-07-13 03:18:16.608631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:10.224 request: 00:29:10.224 { 00:29:10.224 "name": "nvme0", 00:29:10.224 "trtype": "tcp", 00:29:10.225 "traddr": "127.0.0.1", 00:29:10.225 "adrfam": "ipv4", 00:29:10.225 "trsvcid": "4420", 00:29:10.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.225 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:10.225 "prchk_reftag": false, 00:29:10.225 "prchk_guard": false, 00:29:10.225 "hdgst": false, 00:29:10.225 "ddgst": false, 00:29:10.225 "psk": ":spdk-test:key1", 00:29:10.225 "method": "bdev_nvme_attach_controller", 00:29:10.225 "req_id": 1 00:29:10.225 } 00:29:10.225 Got JSON-RPC error response 00:29:10.225 response: 00:29:10.225 { 00:29:10.225 "code": -5, 00:29:10.225 "message": "Input/output error" 00:29:10.225 } 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@33 -- # sn=274313053 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 274313053 00:29:10.225 1 links removed 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@33 -- # sn=124691238 00:29:10.225 03:18:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 124691238 00:29:10.225 1 links removed 00:29:10.225 03:18:16 
keyring_linux -- keyring/linux.sh@41 -- # killprocess 91807 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 91807 ']' 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 91807 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91807 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:10.225 killing process with pid 91807 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91807' 00:29:10.225 Received shutdown signal, test time was about 1.000000 seconds 00:29:10.225 00:29:10.225 Latency(us) 00:29:10.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.225 =================================================================================================================== 00:29:10.225 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@967 -- # kill 91807 00:29:10.225 03:18:16 keyring_linux -- common/autotest_common.sh@972 -- # wait 91807 00:29:11.160 03:18:17 keyring_linux -- keyring/linux.sh@42 -- # killprocess 91787 00:29:11.160 03:18:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 91787 ']' 00:29:11.160 03:18:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 91787 00:29:11.160 03:18:17 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:11.160 03:18:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:11.160 03:18:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91787 00:29:11.160 killing process with pid 91787 00:29:11.160 03:18:17 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:11.160 03:18:17 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:11.160 03:18:17 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91787' 00:29:11.160 03:18:17 keyring_linux -- common/autotest_common.sh@967 -- # kill 91787 00:29:11.160 03:18:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 91787 00:29:13.063 ************************************ 00:29:13.063 END TEST keyring_linux 00:29:13.063 ************************************ 00:29:13.063 00:29:13.063 real 0m9.086s 00:29:13.063 user 0m16.296s 00:29:13.063 sys 0m1.622s 00:29:13.063 03:18:19 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:13.063 03:18:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:13.063 03:18:19 -- common/autotest_common.sh@1142 -- # return 0 00:29:13.063 03:18:19 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:13.063 03:18:19 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:13.063 03:18:19 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:13.063 03:18:19 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:29:13.063 03:18:19 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:29:13.063 03:18:19 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:13.063 03:18:19 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:13.063 03:18:19 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:13.063 03:18:19 -- spdk/autotest.sh@347 -- # '[' 0 
-eq 1 ']' 00:29:13.063 03:18:19 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:13.063 03:18:19 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:13.063 03:18:19 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:13.063 03:18:19 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:13.063 03:18:19 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:13.063 03:18:19 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:13.063 03:18:19 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:29:13.063 03:18:19 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:29:13.063 03:18:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:13.063 03:18:19 -- common/autotest_common.sh@10 -- # set +x 00:29:13.063 03:18:19 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:29:13.063 03:18:19 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:13.063 03:18:19 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:13.063 03:18:19 -- common/autotest_common.sh@10 -- # set +x 00:29:14.439 INFO: APP EXITING 00:29:14.439 INFO: killing all VMs 00:29:14.439 INFO: killing vhost app 00:29:14.439 INFO: EXIT DONE 00:29:15.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:15.375 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:15.375 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:15.944 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:15.944 Cleaning 00:29:15.944 Removing: /var/run/dpdk/spdk0/config 00:29:15.944 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:15.944 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:15.944 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:15.944 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:15.944 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:15.944 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:15.944 Removing: /var/run/dpdk/spdk1/config 00:29:15.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:15.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:15.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:15.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:15.944 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:15.944 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:15.944 Removing: /var/run/dpdk/spdk2/config 00:29:15.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:15.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:15.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:15.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:15.944 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:15.944 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:15.944 Removing: /var/run/dpdk/spdk3/config 00:29:15.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:15.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:15.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:15.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:15.944 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:15.944 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:15.944 Removing: /var/run/dpdk/spdk4/config 00:29:15.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:15.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:15.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:15.944 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:15.944 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:15.944 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:15.944 Removing: /dev/shm/nvmf_trace.0 00:29:15.944 Removing: /dev/shm/spdk_tgt_trace.pid59459 00:29:15.944 Removing: /var/run/dpdk/spdk0 00:29:15.944 Removing: /var/run/dpdk/spdk1 00:29:15.944 Removing: /var/run/dpdk/spdk2 00:29:15.944 Removing: /var/run/dpdk/spdk3 00:29:15.944 Removing: /var/run/dpdk/spdk4 00:29:15.944 Removing: /var/run/dpdk/spdk_pid59254 00:29:15.944 Removing: /var/run/dpdk/spdk_pid59459 00:29:15.944 Removing: /var/run/dpdk/spdk_pid59669 00:29:15.944 Removing: /var/run/dpdk/spdk_pid59773 00:29:15.944 Removing: /var/run/dpdk/spdk_pid59818 00:29:15.944 Removing: /var/run/dpdk/spdk_pid59946 00:29:15.944 Removing: /var/run/dpdk/spdk_pid59963 00:29:15.944 Removing: /var/run/dpdk/spdk_pid60102 00:29:15.944 Removing: /var/run/dpdk/spdk_pid60300 00:29:15.944 Removing: /var/run/dpdk/spdk_pid60452 00:29:15.944 Removing: /var/run/dpdk/spdk_pid60549 00:29:15.944 Removing: /var/run/dpdk/spdk_pid60637 00:29:15.944 Removing: /var/run/dpdk/spdk_pid60745 00:29:15.944 Removing: /var/run/dpdk/spdk_pid60835 00:29:15.944 Removing: /var/run/dpdk/spdk_pid60874 00:29:15.944 Removing: /var/run/dpdk/spdk_pid60911 00:29:15.944 Removing: /var/run/dpdk/spdk_pid60979 00:29:16.203 Removing: /var/run/dpdk/spdk_pid61085 00:29:16.203 Removing: /var/run/dpdk/spdk_pid61530 00:29:16.203 Removing: /var/run/dpdk/spdk_pid61599 00:29:16.203 Removing: /var/run/dpdk/spdk_pid61662 00:29:16.203 Removing: /var/run/dpdk/spdk_pid61678 00:29:16.203 Removing: /var/run/dpdk/spdk_pid61802 00:29:16.203 Removing: /var/run/dpdk/spdk_pid61826 00:29:16.203 Removing: /var/run/dpdk/spdk_pid61949 00:29:16.203 Removing: /var/run/dpdk/spdk_pid61968 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62028 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62050 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62105 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62123 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62292 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62334 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62414 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62483 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62516 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62583 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62635 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62676 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62717 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62758 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62810 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62851 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62892 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62939 00:29:16.203 Removing: /var/run/dpdk/spdk_pid62980 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63026 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63067 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63114 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63155 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63196 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63237 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63284 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63333 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63377 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63424 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63466 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63548 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63653 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63983 00:29:16.203 Removing: /var/run/dpdk/spdk_pid63997 
00:29:16.203 Removing: /var/run/dpdk/spdk_pid64040 00:29:16.203 Removing: /var/run/dpdk/spdk_pid64070 00:29:16.203 Removing: /var/run/dpdk/spdk_pid64093 00:29:16.203 Removing: /var/run/dpdk/spdk_pid64125 00:29:16.203 Removing: /var/run/dpdk/spdk_pid64156 00:29:16.203 Removing: /var/run/dpdk/spdk_pid64184 00:29:16.203 Removing: /var/run/dpdk/spdk_pid64216 00:29:16.203 Removing: /var/run/dpdk/spdk_pid64241 00:29:16.203 Removing: /var/run/dpdk/spdk_pid64269 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64300 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64325 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64353 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64384 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64415 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64437 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64468 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64499 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64526 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64569 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64595 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64636 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64712 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64753 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64780 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64815 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64842 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64867 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64927 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64953 00:29:16.204 Removing: /var/run/dpdk/spdk_pid64993 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65019 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65042 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65063 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65085 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65112 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65139 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65155 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65201 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65234 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65261 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65302 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65323 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65343 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65395 00:29:16.204 Removing: /var/run/dpdk/spdk_pid65419 00:29:16.463 Removing: /var/run/dpdk/spdk_pid65463 00:29:16.463 Removing: /var/run/dpdk/spdk_pid65477 00:29:16.463 Removing: /var/run/dpdk/spdk_pid65502 00:29:16.463 Removing: /var/run/dpdk/spdk_pid65522 00:29:16.463 Removing: /var/run/dpdk/spdk_pid65541 00:29:16.463 Removing: /var/run/dpdk/spdk_pid65566 00:29:16.463 Removing: /var/run/dpdk/spdk_pid65586 00:29:16.463 Removing: /var/run/dpdk/spdk_pid65605 00:29:16.463 Removing: /var/run/dpdk/spdk_pid65686 00:29:16.463 Removing: /var/run/dpdk/spdk_pid65784 00:29:16.463 Removing: /var/run/dpdk/spdk_pid65928 00:29:16.463 Removing: /var/run/dpdk/spdk_pid65976 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66041 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66064 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66098 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66125 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66174 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66196 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66278 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66328 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66401 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66520 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66614 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66667 00:29:16.463 Removing: 
/var/run/dpdk/spdk_pid66779 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66835 00:29:16.463 Removing: /var/run/dpdk/spdk_pid66885 00:29:16.463 Removing: /var/run/dpdk/spdk_pid67127 00:29:16.463 Removing: /var/run/dpdk/spdk_pid67240 00:29:16.463 Removing: /var/run/dpdk/spdk_pid67286 00:29:16.463 Removing: /var/run/dpdk/spdk_pid67606 00:29:16.463 Removing: /var/run/dpdk/spdk_pid67641 00:29:16.463 Removing: /var/run/dpdk/spdk_pid67955 00:29:16.463 Removing: /var/run/dpdk/spdk_pid68377 00:29:16.463 Removing: /var/run/dpdk/spdk_pid68658 00:29:16.463 Removing: /var/run/dpdk/spdk_pid69480 00:29:16.463 Removing: /var/run/dpdk/spdk_pid70318 00:29:16.463 Removing: /var/run/dpdk/spdk_pid70446 00:29:16.463 Removing: /var/run/dpdk/spdk_pid70526 00:29:16.463 Removing: /var/run/dpdk/spdk_pid71810 00:29:16.463 Removing: /var/run/dpdk/spdk_pid72070 00:29:16.463 Removing: /var/run/dpdk/spdk_pid75357 00:29:16.463 Removing: /var/run/dpdk/spdk_pid75670 00:29:16.463 Removing: /var/run/dpdk/spdk_pid75782 00:29:16.463 Removing: /var/run/dpdk/spdk_pid75922 00:29:16.463 Removing: /var/run/dpdk/spdk_pid75962 00:29:16.463 Removing: /var/run/dpdk/spdk_pid75995 00:29:16.463 Removing: /var/run/dpdk/spdk_pid76025 00:29:16.463 Removing: /var/run/dpdk/spdk_pid76136 00:29:16.463 Removing: /var/run/dpdk/spdk_pid76277 00:29:16.463 Removing: /var/run/dpdk/spdk_pid76452 00:29:16.463 Removing: /var/run/dpdk/spdk_pid76552 00:29:16.463 Removing: /var/run/dpdk/spdk_pid76758 00:29:16.463 Removing: /var/run/dpdk/spdk_pid76860 00:29:16.463 Removing: /var/run/dpdk/spdk_pid76966 00:29:16.463 Removing: /var/run/dpdk/spdk_pid77291 00:29:16.463 Removing: /var/run/dpdk/spdk_pid77652 00:29:16.463 Removing: /var/run/dpdk/spdk_pid77660 00:29:16.463 Removing: /var/run/dpdk/spdk_pid79910 00:29:16.463 Removing: /var/run/dpdk/spdk_pid79913 00:29:16.463 Removing: /var/run/dpdk/spdk_pid80209 00:29:16.463 Removing: /var/run/dpdk/spdk_pid80230 00:29:16.463 Removing: /var/run/dpdk/spdk_pid80249 00:29:16.463 Removing: /var/run/dpdk/spdk_pid80282 00:29:16.463 Removing: /var/run/dpdk/spdk_pid80288 00:29:16.463 Removing: /var/run/dpdk/spdk_pid80378 00:29:16.463 Removing: /var/run/dpdk/spdk_pid80381 00:29:16.463 Removing: /var/run/dpdk/spdk_pid80485 00:29:16.463 Removing: /var/run/dpdk/spdk_pid80499 00:29:16.463 Removing: /var/run/dpdk/spdk_pid80603 00:29:16.463 Removing: /var/run/dpdk/spdk_pid80606 00:29:16.463 Removing: /var/run/dpdk/spdk_pid81002 00:29:16.463 Removing: /var/run/dpdk/spdk_pid81044 00:29:16.463 Removing: /var/run/dpdk/spdk_pid81147 00:29:16.463 Removing: /var/run/dpdk/spdk_pid81225 00:29:16.463 Removing: /var/run/dpdk/spdk_pid81542 00:29:16.463 Removing: /var/run/dpdk/spdk_pid81750 00:29:16.463 Removing: /var/run/dpdk/spdk_pid82147 00:29:16.463 Removing: /var/run/dpdk/spdk_pid82659 00:29:16.463 Removing: /var/run/dpdk/spdk_pid83493 00:29:16.463 Removing: /var/run/dpdk/spdk_pid84097 00:29:16.463 Removing: /var/run/dpdk/spdk_pid84110 00:29:16.463 Removing: /var/run/dpdk/spdk_pid86039 00:29:16.463 Removing: /var/run/dpdk/spdk_pid86107 00:29:16.463 Removing: /var/run/dpdk/spdk_pid86179 00:29:16.463 Removing: /var/run/dpdk/spdk_pid86250 00:29:16.722 Removing: /var/run/dpdk/spdk_pid86391 00:29:16.722 Removing: /var/run/dpdk/spdk_pid86458 00:29:16.722 Removing: /var/run/dpdk/spdk_pid86525 00:29:16.722 Removing: /var/run/dpdk/spdk_pid86592 00:29:16.722 Removing: /var/run/dpdk/spdk_pid86935 00:29:16.722 Removing: /var/run/dpdk/spdk_pid88100 00:29:16.722 Removing: /var/run/dpdk/spdk_pid88258 00:29:16.722 Removing: /var/run/dpdk/spdk_pid88502 
00:29:16.722 Removing: /var/run/dpdk/spdk_pid89060 00:29:16.722 Removing: /var/run/dpdk/spdk_pid89219 00:29:16.722 Removing: /var/run/dpdk/spdk_pid89380 00:29:16.722 Removing: /var/run/dpdk/spdk_pid89485 00:29:16.722 Removing: /var/run/dpdk/spdk_pid89641 00:29:16.722 Removing: /var/run/dpdk/spdk_pid89755 00:29:16.722 Removing: /var/run/dpdk/spdk_pid90426 00:29:16.722 Removing: /var/run/dpdk/spdk_pid90458 00:29:16.722 Removing: /var/run/dpdk/spdk_pid90493 00:29:16.722 Removing: /var/run/dpdk/spdk_pid90883 00:29:16.722 Removing: /var/run/dpdk/spdk_pid90914 00:29:16.722 Removing: /var/run/dpdk/spdk_pid90955 00:29:16.722 Removing: /var/run/dpdk/spdk_pid91371 00:29:16.722 Removing: /var/run/dpdk/spdk_pid91394 00:29:16.722 Removing: /var/run/dpdk/spdk_pid91646 00:29:16.722 Removing: /var/run/dpdk/spdk_pid91787 00:29:16.722 Removing: /var/run/dpdk/spdk_pid91807 00:29:16.722 Clean 00:29:16.722 03:18:23 -- common/autotest_common.sh@1451 -- # return 0 00:29:16.722 03:18:23 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:29:16.722 03:18:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:16.722 03:18:23 -- common/autotest_common.sh@10 -- # set +x 00:29:16.722 03:18:23 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:29:16.722 03:18:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:16.722 03:18:23 -- common/autotest_common.sh@10 -- # set +x 00:29:16.722 03:18:23 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:16.722 03:18:23 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:16.722 03:18:23 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:16.722 03:18:23 -- spdk/autotest.sh@391 -- # hash lcov 00:29:16.722 03:18:23 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:16.722 03:18:23 -- spdk/autotest.sh@393 -- # hostname 00:29:16.722 03:18:23 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:16.981 geninfo: WARNING: invalid characters removed from testname! 
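For reference, the coverage post-processing that autotest.sh performs here reduces to a short lcov sequence: capture the coverage produced by the test run, merge it with the cov_base.info baseline captured earlier in the job, and then strip third-party and system paths (the -r filter steps that follow below in this log). The sketch underneath is a condensed reading of the commands visible in this trace, not the script itself; OUT and SPDK_DIR stand in for /home/vagrant/spdk_repo/spdk/../output and the spdk checkout, and the long list of --rc options is abbreviated to RC. The geninfo warning just above appears to be triggered by the hyphens in the -t test name (the image hostname) and is harmless.

  # Condensed sketch of the lcov steps traced in this log (paths and options abbreviated).
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  OUT=$SPDK_DIR/../output
  RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
  # 1. capture coverage generated by the test run
  lcov $RC -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"
  # 2. merge it with the baseline captured before the tests ran
  lcov $RC -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  # 3. drop third-party and system code from the combined report
  for filter in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $RC -r "$OUT/cov_total.info" "$filter" -o "$OUT/cov_total.info"
  done
  # 4. keep only the filtered cov_total.info
  rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"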
00:29:43.527 03:18:48 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:46.059 03:18:52 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:49.346 03:18:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:51.877 03:18:57 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:54.438 03:19:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:56.965 03:19:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:00.254 03:19:06 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:00.254 03:19:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:00.254 03:19:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:00.254 03:19:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.254 03:19:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.254 03:19:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.254 03:19:06 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.254 03:19:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.254 03:19:06 -- paths/export.sh@5 -- $ export PATH 00:30:00.254 03:19:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.254 03:19:06 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:00.254 03:19:06 -- common/autobuild_common.sh@444 -- $ date +%s 00:30:00.254 03:19:06 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720840746.XXXXXX 00:30:00.254 03:19:06 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720840746.iEJZVz 00:30:00.254 03:19:06 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:30:00.254 03:19:06 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:30:00.254 03:19:06 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:30:00.254 03:19:06 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:00.254 03:19:06 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:00.254 03:19:06 -- common/autobuild_common.sh@460 -- $ get_config_params 00:30:00.254 03:19:06 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:00.254 03:19:06 -- common/autotest_common.sh@10 -- $ set +x 00:30:00.254 03:19:06 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:30:00.254 03:19:06 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:30:00.254 03:19:06 -- pm/common@17 -- $ local monitor 00:30:00.254 03:19:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:00.254 03:19:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:00.254 03:19:06 -- pm/common@25 -- $ sleep 1 00:30:00.254 03:19:06 -- pm/common@21 -- $ date +%s 00:30:00.254 03:19:06 -- pm/common@21 -- $ date +%s 00:30:00.254 03:19:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720840746 00:30:00.254 03:19:06 -- 
pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720840746 00:30:00.254 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720840746_collect-vmstat.pm.log 00:30:00.254 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720840746_collect-cpu-load.pm.log 00:30:00.822 03:19:07 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:30:00.822 03:19:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:00.822 03:19:07 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:00.822 03:19:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:00.822 03:19:07 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:00.822 03:19:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:00.822 03:19:07 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:00.822 03:19:07 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:00.822 03:19:07 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:00.822 03:19:07 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:01.081 03:19:07 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:01.082 03:19:07 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:01.082 03:19:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:01.082 03:19:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:01.082 03:19:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:01.082 03:19:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:01.082 03:19:07 -- pm/common@44 -- $ pid=93553 00:30:01.082 03:19:07 -- pm/common@50 -- $ kill -TERM 93553 00:30:01.082 03:19:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:01.082 03:19:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:01.082 03:19:07 -- pm/common@44 -- $ pid=93554 00:30:01.082 03:19:07 -- pm/common@50 -- $ kill -TERM 93554 00:30:01.082 + [[ -n 5155 ]] 00:30:01.082 + sudo kill 5155 00:30:01.092 [Pipeline] } 00:30:01.142 [Pipeline] // timeout 00:30:01.147 [Pipeline] } 00:30:01.165 [Pipeline] // stage 00:30:01.170 [Pipeline] } 00:30:01.187 [Pipeline] // catchError 00:30:01.196 [Pipeline] stage 00:30:01.199 [Pipeline] { (Stop VM) 00:30:01.213 [Pipeline] sh 00:30:01.495 + vagrant halt 00:30:04.781 ==> default: Halting domain... 00:30:11.363 [Pipeline] sh 00:30:11.643 + vagrant destroy -f 00:30:15.839 ==> default: Removing domain... 
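The stop_monitor_resources trace above boils down to a simple pid-file pattern: each resource monitor started for autopackage wrote its pid under the power/ output directory, and the stop hook sends SIGTERM to whatever pids it finds there, after which the Jenkins wrapper process is killed and the Vagrant guest is halted and destroyed. Below is a simplified reconstruction of that logic based only on the xtrace output shown here, not the real pm/common helper.

  # Simplified reconstruction of stop_monitor_resources (names and layout assumed
  # from the trace above; the actual helper ships with SPDK under scripts/perf/pm/).
  OUT=/home/vagrant/spdk_repo/spdk/../output
  for monitor in collect-cpu-load collect-vmstat; do
      pidfile=$OUT/power/$monitor.pid
      [[ -e $pidfile ]] || continue   # monitor never started, nothing to stop
      pid=$(<"$pidfile")              # e.g. 93553 / 93554 in this run
      kill -TERM "$pid"               # stop the collector writing its .pm.log
  done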
00:30:15.899 [Pipeline] sh 00:30:16.178 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:30:16.188 [Pipeline] } 00:30:16.207 [Pipeline] // stage 00:30:16.213 [Pipeline] } 00:30:16.230 [Pipeline] // dir 00:30:16.236 [Pipeline] } 00:30:16.254 [Pipeline] // wrap 00:30:16.261 [Pipeline] } 00:30:16.277 [Pipeline] // catchError 00:30:16.286 [Pipeline] stage 00:30:16.288 [Pipeline] { (Epilogue) 00:30:16.303 [Pipeline] sh 00:30:16.600 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:24.727 [Pipeline] catchError 00:30:24.729 [Pipeline] { 00:30:24.744 [Pipeline] sh 00:30:25.022 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:25.281 Artifacts sizes are good 00:30:25.292 [Pipeline] } 00:30:25.309 [Pipeline] // catchError 00:30:25.320 [Pipeline] archiveArtifacts 00:30:25.327 Archiving artifacts 00:30:25.538 [Pipeline] cleanWs 00:30:25.549 [WS-CLEANUP] Deleting project workspace... 00:30:25.549 [WS-CLEANUP] Deferred wipeout is used... 00:30:25.556 [WS-CLEANUP] done 00:30:25.558 [Pipeline] } 00:30:25.577 [Pipeline] // stage 00:30:25.583 [Pipeline] } 00:30:25.601 [Pipeline] // node 00:30:25.607 [Pipeline] End of Pipeline 00:30:25.643 Finished: SUCCESS
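One practical note on the archived output: the timing_finish step traced earlier renders /home/vagrant/spdk_repo/spdk/../output/timing.txt into a "Build Timing" flame graph, and that timing.txt should be among the artifacts archived above. To re-render it locally, the same flamegraph.pl invocation can be reused; the output redirection below is an assumption (set -x does not print redirections), but flamegraph.pl writes its SVG to stdout, and the /usr/local/FlameGraph path is the CI VM's install location.

  # Re-render the build-timing flame graph from an archived timing.txt
  # (same flags as the timing_finish trace; '> timing.svg' added by assumption).
  /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
      --countname seconds timing.txt > timing.svg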